00:00:00.000 Started by upstream project "autotest-per-patch" build number 126141 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.034 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.034 The recommended git tool is: git 00:00:00.035 using credential 00000000-0000-0000-0000-000000000002 00:00:00.036 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.049 Fetching changes from the remote Git repository 00:00:00.051 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.075 Using shallow fetch with depth 1 00:00:00.075 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.075 > git --version # timeout=10 00:00:00.109 > git --version # 'git version 2.39.2' 00:00:00.109 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.146 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.146 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.633 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.643 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.654 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.654 > git config core.sparsecheckout # timeout=10 00:00:03.664 > git read-tree -mu HEAD # timeout=10 00:00:03.679 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.696 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.696 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.782 [Pipeline] Start of Pipeline 00:00:03.797 [Pipeline] library 00:00:03.800 Loading library shm_lib@master 00:00:07.240 Library shm_lib@master is cached. Copying from home. 00:00:07.274 [Pipeline] node 00:00:07.407 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.410 [Pipeline] { 00:00:07.431 [Pipeline] catchError 00:00:07.433 [Pipeline] { 00:00:07.451 [Pipeline] wrap 00:00:07.464 [Pipeline] { 00:00:07.476 [Pipeline] stage 00:00:07.479 [Pipeline] { (Prologue) 00:00:07.499 [Pipeline] echo 00:00:07.501 Node: VM-host-SM9 00:00:07.507 [Pipeline] cleanWs 00:00:07.515 [WS-CLEANUP] Deleting project workspace... 00:00:07.515 [WS-CLEANUP] Deferred wipeout is used... 
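Editor's note: the shallow checkout that Jenkins performs above can be reproduced by hand with plain git. The commands below are a minimal sketch based only on the URL and revision printed in this log; access credentials, the proxy setting, and the local directory name are assumptions (the CI job injects its own GIT_ASKPASS credentials and http proxy), and the pinned revision was simply the tip of master at the time of this run.

    # Sketch: replicate the CI's shallow fetch + detached checkout of the jbp repo.
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # depth=1 matches "Using shallow fetch with depth 1" in the log above
    git fetch --tags --force --depth=1 origin refs/heads/master
    # 9bf0dabe... was FETCH_HEAD (tip of master) when this job ran
    git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d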
00:00:07.521 [WS-CLEANUP] done 00:00:07.745 [Pipeline] setCustomBuildProperty 00:00:07.820 [Pipeline] httpRequest 00:00:07.838 [Pipeline] echo 00:00:07.840 Sorcerer 10.211.164.101 is alive 00:00:07.845 [Pipeline] httpRequest 00:00:07.849 HttpMethod: GET 00:00:07.849 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.849 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.851 Response Code: HTTP/1.1 200 OK 00:00:07.851 Success: Status code 200 is in the accepted range: 200,404 00:00:07.852 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.444 [Pipeline] sh 00:00:09.721 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.734 [Pipeline] httpRequest 00:00:09.752 [Pipeline] echo 00:00:09.753 Sorcerer 10.211.164.101 is alive 00:00:09.759 [Pipeline] httpRequest 00:00:09.762 HttpMethod: GET 00:00:09.763 URL: http://10.211.164.101/packages/spdk_182dd7de475bca6e9768a600616eb841d1034467.tar.gz 00:00:09.763 Sending request to url: http://10.211.164.101/packages/spdk_182dd7de475bca6e9768a600616eb841d1034467.tar.gz 00:00:09.769 Response Code: HTTP/1.1 200 OK 00:00:09.770 Success: Status code 200 is in the accepted range: 200,404 00:00:09.770 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_182dd7de475bca6e9768a600616eb841d1034467.tar.gz 00:00:34.341 [Pipeline] sh 00:00:34.623 + tar --no-same-owner -xf spdk_182dd7de475bca6e9768a600616eb841d1034467.tar.gz 00:00:37.168 [Pipeline] sh 00:00:37.445 + git -C spdk log --oneline -n5 00:00:37.445 182dd7de4 nvmf: large IU and atomic write unit reporting 00:00:37.445 968224f46 app/trace_record: add a optional option '-t' 00:00:37.445 d83ccf437 accel: clarify the usage of spdk_accel_sequence_abort() 00:00:37.445 f282c9958 doc/jsonrpc.md fix style issue 00:00:37.445 868be8ed2 iscs: chap mutual authentication should apply when configured. 
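Editor's note: the two httpRequest steps above pull pre-packaged source snapshots from the internal "Sorcerer" cache and unpack them with tar. A rough equivalent outside the pipeline is sketched below; using curl in place of the Jenkins httpRequest step is an assumption, and the cache host 10.211.164.101 is only reachable from the CI network.

    # Sketch: fetch and unpack the same SPDK snapshot the pipeline uses.
    curl -f -o spdk_182dd7de475bca6e9768a600616eb841d1034467.tar.gz \
        http://10.211.164.101/packages/spdk_182dd7de475bca6e9768a600616eb841d1034467.tar.gz
    # --no-same-owner mirrors the extraction command shown in the log
    tar --no-same-owner -xf spdk_182dd7de475bca6e9768a600616eb841d1034467.tar.gz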
00:00:37.469 [Pipeline] writeFile 00:00:37.490 [Pipeline] sh 00:00:37.771 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:37.783 [Pipeline] sh 00:00:38.062 + cat autorun-spdk.conf 00:00:38.062 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.062 SPDK_TEST_NVMF=1 00:00:38.062 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.062 SPDK_TEST_URING=1 00:00:38.062 SPDK_TEST_USDT=1 00:00:38.062 SPDK_RUN_UBSAN=1 00:00:38.062 NET_TYPE=virt 00:00:38.062 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.069 RUN_NIGHTLY=0 00:00:38.071 [Pipeline] } 00:00:38.087 [Pipeline] // stage 00:00:38.102 [Pipeline] stage 00:00:38.104 [Pipeline] { (Run VM) 00:00:38.116 [Pipeline] sh 00:00:38.395 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:38.396 + echo 'Start stage prepare_nvme.sh' 00:00:38.396 Start stage prepare_nvme.sh 00:00:38.396 + [[ -n 1 ]] 00:00:38.396 + disk_prefix=ex1 00:00:38.396 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:38.396 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:38.396 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:38.396 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.396 ++ SPDK_TEST_NVMF=1 00:00:38.396 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.396 ++ SPDK_TEST_URING=1 00:00:38.396 ++ SPDK_TEST_USDT=1 00:00:38.396 ++ SPDK_RUN_UBSAN=1 00:00:38.396 ++ NET_TYPE=virt 00:00:38.396 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.396 ++ RUN_NIGHTLY=0 00:00:38.396 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:38.396 + nvme_files=() 00:00:38.396 + declare -A nvme_files 00:00:38.396 + backend_dir=/var/lib/libvirt/images/backends 00:00:38.396 + nvme_files['nvme.img']=5G 00:00:38.396 + nvme_files['nvme-cmb.img']=5G 00:00:38.396 + nvme_files['nvme-multi0.img']=4G 00:00:38.396 + nvme_files['nvme-multi1.img']=4G 00:00:38.396 + nvme_files['nvme-multi2.img']=4G 00:00:38.396 + nvme_files['nvme-openstack.img']=8G 00:00:38.396 + nvme_files['nvme-zns.img']=5G 00:00:38.396 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:38.396 + (( SPDK_TEST_FTL == 1 )) 00:00:38.396 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:38.396 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:38.396 + for nvme in "${!nvme_files[@]}" 00:00:38.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:38.396 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.396 + for nvme in "${!nvme_files[@]}" 00:00:38.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:38.396 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.396 + for nvme in "${!nvme_files[@]}" 00:00:38.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:38.654 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:38.654 + for nvme in "${!nvme_files[@]}" 00:00:38.654 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:38.654 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.654 + for nvme in "${!nvme_files[@]}" 00:00:38.654 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:38.912 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.912 + for nvme in "${!nvme_files[@]}" 00:00:38.912 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:38.912 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.912 + for nvme in "${!nvme_files[@]}" 00:00:38.912 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:39.171 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:39.171 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:39.171 + echo 'End stage prepare_nvme.sh' 00:00:39.171 End stage prepare_nvme.sh 00:00:39.182 [Pipeline] sh 00:00:39.461 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:39.461 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:00:39.461 00:00:39.461 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:39.461 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:39.461 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:39.461 HELP=0 00:00:39.461 DRY_RUN=0 00:00:39.461 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:39.461 NVME_DISKS_TYPE=nvme,nvme, 00:00:39.461 NVME_AUTO_CREATE=0 00:00:39.461 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:39.461 NVME_CMB=,, 00:00:39.461 NVME_PMR=,, 00:00:39.461 NVME_ZNS=,, 00:00:39.461 NVME_MS=,, 00:00:39.461 NVME_FDP=,, 
00:00:39.461 SPDK_VAGRANT_DISTRO=fedora38 00:00:39.461 SPDK_VAGRANT_VMCPU=10 00:00:39.461 SPDK_VAGRANT_VMRAM=12288 00:00:39.461 SPDK_VAGRANT_PROVIDER=libvirt 00:00:39.461 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:39.461 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:39.461 SPDK_OPENSTACK_NETWORK=0 00:00:39.461 VAGRANT_PACKAGE_BOX=0 00:00:39.461 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:39.461 FORCE_DISTRO=true 00:00:39.461 VAGRANT_BOX_VERSION= 00:00:39.461 EXTRA_VAGRANTFILES= 00:00:39.461 NIC_MODEL=e1000 00:00:39.461 00:00:39.461 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:39.461 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:41.988 Bringing machine 'default' up with 'libvirt' provider... 00:00:42.555 ==> default: Creating image (snapshot of base box volume). 00:00:42.555 ==> default: Creating domain with the following settings... 00:00:42.556 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720800266_1ec5e3ab247665392562 00:00:42.556 ==> default: -- Domain type: kvm 00:00:42.556 ==> default: -- Cpus: 10 00:00:42.556 ==> default: -- Feature: acpi 00:00:42.556 ==> default: -- Feature: apic 00:00:42.556 ==> default: -- Feature: pae 00:00:42.556 ==> default: -- Memory: 12288M 00:00:42.556 ==> default: -- Memory Backing: hugepages: 00:00:42.556 ==> default: -- Management MAC: 00:00:42.556 ==> default: -- Loader: 00:00:42.556 ==> default: -- Nvram: 00:00:42.556 ==> default: -- Base box: spdk/fedora38 00:00:42.556 ==> default: -- Storage pool: default 00:00:42.556 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720800266_1ec5e3ab247665392562.img (20G) 00:00:42.556 ==> default: -- Volume Cache: default 00:00:42.556 ==> default: -- Kernel: 00:00:42.556 ==> default: -- Initrd: 00:00:42.556 ==> default: -- Graphics Type: vnc 00:00:42.556 ==> default: -- Graphics Port: -1 00:00:42.556 ==> default: -- Graphics IP: 127.0.0.1 00:00:42.556 ==> default: -- Graphics Password: Not defined 00:00:42.556 ==> default: -- Video Type: cirrus 00:00:42.556 ==> default: -- Video VRAM: 9216 00:00:42.556 ==> default: -- Sound Type: 00:00:42.556 ==> default: -- Keymap: en-us 00:00:42.556 ==> default: -- TPM Path: 00:00:42.556 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:42.556 ==> default: -- Command line args: 00:00:42.556 ==> default: -> value=-device, 00:00:42.556 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:42.556 ==> default: -> value=-drive, 00:00:42.556 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:42.556 ==> default: -> value=-device, 00:00:42.556 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.556 ==> default: -> value=-device, 00:00:42.556 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:42.556 ==> default: -> value=-drive, 00:00:42.556 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:42.556 ==> default: -> value=-device, 00:00:42.556 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.556 ==> default: -> value=-drive, 
00:00:42.556 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:42.556 ==> default: -> value=-device, 00:00:42.556 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.556 ==> default: -> value=-drive, 00:00:42.556 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:42.556 ==> default: -> value=-device, 00:00:42.556 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.556 ==> default: Creating shared folders metadata... 00:00:42.556 ==> default: Starting domain. 00:00:43.932 ==> default: Waiting for domain to get an IP address... 00:01:02.019 ==> default: Waiting for SSH to become available... 00:01:02.019 ==> default: Configuring and enabling network interfaces... 00:01:04.557 default: SSH address: 192.168.121.194:22 00:01:04.557 default: SSH username: vagrant 00:01:04.557 default: SSH auth method: private key 00:01:07.092 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:15.250 ==> default: Mounting SSHFS shared folder... 00:01:15.817 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:15.817 ==> default: Checking Mount.. 00:01:16.752 ==> default: Folder Successfully Mounted! 00:01:16.752 ==> default: Running provisioner: file... 00:01:17.688 default: ~/.gitconfig => .gitconfig 00:01:17.948 00:01:17.948 SUCCESS! 00:01:17.948 00:01:17.948 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:17.948 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:17.948 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:17.948 00:01:17.958 [Pipeline] } 00:01:17.974 [Pipeline] // stage 00:01:17.981 [Pipeline] dir 00:01:17.981 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:17.983 [Pipeline] { 00:01:17.996 [Pipeline] catchError 00:01:17.997 [Pipeline] { 00:01:18.011 [Pipeline] sh 00:01:18.295 + vagrant ssh-config --host vagrant 00:01:18.295 + sed -ne /^Host/,$p 00:01:18.295 + tee ssh_conf 00:01:21.583 Host vagrant 00:01:21.583 HostName 192.168.121.194 00:01:21.583 User vagrant 00:01:21.583 Port 22 00:01:21.583 UserKnownHostsFile /dev/null 00:01:21.583 StrictHostKeyChecking no 00:01:21.583 PasswordAuthentication no 00:01:21.583 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:21.583 IdentitiesOnly yes 00:01:21.583 LogLevel FATAL 00:01:21.583 ForwardAgent yes 00:01:21.583 ForwardX11 yes 00:01:21.583 00:01:21.598 [Pipeline] withEnv 00:01:21.601 [Pipeline] { 00:01:21.621 [Pipeline] sh 00:01:21.900 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:21.900 source /etc/os-release 00:01:21.900 [[ -e /image.version ]] && img=$(< /image.version) 00:01:21.900 # Minimal, systemd-like check. 
00:01:21.900 if [[ -e /.dockerenv ]]; then 00:01:21.900 # Clear garbage from the node's name: 00:01:21.900 # agt-er_autotest_547-896 -> autotest_547-896 00:01:21.900 # $HOSTNAME is the actual container id 00:01:21.900 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:21.900 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:21.900 # We can assume this is a mount from a host where container is running, 00:01:21.900 # so fetch its hostname to easily identify the target swarm worker. 00:01:21.900 container="$(< /etc/hostname) ($agent)" 00:01:21.900 else 00:01:21.900 # Fallback 00:01:21.900 container=$agent 00:01:21.900 fi 00:01:21.900 fi 00:01:21.900 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:21.900 00:01:21.912 [Pipeline] } 00:01:21.932 [Pipeline] // withEnv 00:01:21.940 [Pipeline] setCustomBuildProperty 00:01:21.955 [Pipeline] stage 00:01:21.957 [Pipeline] { (Tests) 00:01:21.975 [Pipeline] sh 00:01:22.251 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:22.521 [Pipeline] sh 00:01:22.796 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:23.066 [Pipeline] timeout 00:01:23.067 Timeout set to expire in 30 min 00:01:23.068 [Pipeline] { 00:01:23.081 [Pipeline] sh 00:01:23.359 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:23.926 HEAD is now at 182dd7de4 nvmf: large IU and atomic write unit reporting 00:01:23.938 [Pipeline] sh 00:01:24.214 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:24.484 [Pipeline] sh 00:01:24.767 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:25.042 [Pipeline] sh 00:01:25.322 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:25.581 ++ readlink -f spdk_repo 00:01:25.581 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:25.581 + [[ -n /home/vagrant/spdk_repo ]] 00:01:25.581 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:25.581 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:25.581 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:25.581 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:25.581 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:25.581 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:25.581 + cd /home/vagrant/spdk_repo 00:01:25.581 + source /etc/os-release 00:01:25.581 ++ NAME='Fedora Linux' 00:01:25.581 ++ VERSION='38 (Cloud Edition)' 00:01:25.581 ++ ID=fedora 00:01:25.581 ++ VERSION_ID=38 00:01:25.581 ++ VERSION_CODENAME= 00:01:25.581 ++ PLATFORM_ID=platform:f38 00:01:25.581 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:25.581 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:25.581 ++ LOGO=fedora-logo-icon 00:01:25.581 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:25.581 ++ HOME_URL=https://fedoraproject.org/ 00:01:25.581 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:25.581 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:25.581 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:25.581 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:25.581 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:25.581 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:25.581 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:25.581 ++ SUPPORT_END=2024-05-14 00:01:25.581 ++ VARIANT='Cloud Edition' 00:01:25.581 ++ VARIANT_ID=cloud 00:01:25.581 + uname -a 00:01:25.581 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:25.581 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:25.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:25.840 Hugepages 00:01:25.840 node hugesize free / total 00:01:25.840 node0 1048576kB 0 / 0 00:01:25.840 node0 2048kB 0 / 0 00:01:25.840 00:01:25.840 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.099 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:26.099 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:26.099 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:26.099 + rm -f /tmp/spdk-ld-path 00:01:26.099 + source autorun-spdk.conf 00:01:26.099 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.099 ++ SPDK_TEST_NVMF=1 00:01:26.099 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.099 ++ SPDK_TEST_URING=1 00:01:26.099 ++ SPDK_TEST_USDT=1 00:01:26.099 ++ SPDK_RUN_UBSAN=1 00:01:26.099 ++ NET_TYPE=virt 00:01:26.099 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.099 ++ RUN_NIGHTLY=0 00:01:26.099 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:26.099 + [[ -n '' ]] 00:01:26.100 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:26.100 + for M in /var/spdk/build-*-manifest.txt 00:01:26.100 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:26.100 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.100 + for M in /var/spdk/build-*-manifest.txt 00:01:26.100 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.100 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.100 ++ uname 00:01:26.100 + [[ Linux == \L\i\n\u\x ]] 00:01:26.100 + sudo dmesg -T 00:01:26.100 + sudo dmesg --clear 00:01:26.100 + dmesg_pid=5148 00:01:26.100 + sudo dmesg -Tw 00:01:26.100 + [[ Fedora Linux == FreeBSD ]] 00:01:26.100 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.100 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.100 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:26.100 + [[ -x /usr/src/fio-static/fio ]] 00:01:26.100 + export FIO_BIN=/usr/src/fio-static/fio 
00:01:26.100 + FIO_BIN=/usr/src/fio-static/fio 00:01:26.100 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:26.100 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:26.100 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:26.100 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.100 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.100 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:26.100 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.100 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.100 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:26.100 Test configuration: 00:01:26.100 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.100 SPDK_TEST_NVMF=1 00:01:26.100 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.100 SPDK_TEST_URING=1 00:01:26.100 SPDK_TEST_USDT=1 00:01:26.100 SPDK_RUN_UBSAN=1 00:01:26.100 NET_TYPE=virt 00:01:26.100 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.359 RUN_NIGHTLY=0 16:05:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:26.359 16:05:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:26.359 16:05:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:26.359 16:05:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:26.359 16:05:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.359 16:05:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.359 16:05:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.359 16:05:09 -- paths/export.sh@5 -- $ export PATH 00:01:26.359 16:05:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.359 16:05:09 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:26.359 16:05:09 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:26.359 16:05:09 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720800309.XXXXXX 00:01:26.359 16:05:09 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720800309.x4FiE1 00:01:26.359 16:05:09 -- 
common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:26.359 16:05:09 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:26.359 16:05:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:26.359 16:05:09 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:26.359 16:05:09 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:26.359 16:05:09 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:26.359 16:05:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:26.359 16:05:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.359 16:05:09 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:26.359 16:05:09 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:26.359 16:05:09 -- pm/common@17 -- $ local monitor 00:01:26.359 16:05:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.359 16:05:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.359 16:05:09 -- pm/common@25 -- $ sleep 1 00:01:26.359 16:05:09 -- pm/common@21 -- $ date +%s 00:01:26.359 16:05:09 -- pm/common@21 -- $ date +%s 00:01:26.359 16:05:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720800309 00:01:26.359 16:05:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720800309 00:01:26.359 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720800309_collect-cpu-load.pm.log 00:01:26.359 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720800309_collect-vmstat.pm.log 00:01:27.345 16:05:10 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:27.345 16:05:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:27.345 16:05:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:27.345 16:05:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:27.345 16:05:10 -- spdk/autobuild.sh@16 -- $ date -u 00:01:27.345 Fri Jul 12 04:05:10 PM UTC 2024 00:01:27.345 16:05:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:27.345 v24.09-pre-194-g182dd7de4 00:01:27.345 16:05:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:27.345 16:05:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:27.345 16:05:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:27.345 16:05:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:27.345 16:05:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:27.345 16:05:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.345 ************************************ 00:01:27.345 START TEST ubsan 00:01:27.345 ************************************ 00:01:27.345 using ubsan 00:01:27.345 16:05:10 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:27.345 00:01:27.345 real 0m0.000s 00:01:27.345 user 0m0.000s 00:01:27.345 sys 0m0.000s 00:01:27.345 
16:05:10 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:27.345 16:05:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.345 ************************************ 00:01:27.345 END TEST ubsan 00:01:27.345 ************************************ 00:01:27.345 16:05:10 -- common/autotest_common.sh@1142 -- $ return 0 00:01:27.345 16:05:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:27.345 16:05:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.345 16:05:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.345 16:05:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.345 16:05:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.345 16:05:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.345 16:05:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.345 16:05:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.345 16:05:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:27.345 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:27.345 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:27.913 Using 'verbs' RDMA provider 00:01:43.724 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:55.931 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:55.931 Creating mk/config.mk...done. 00:01:55.931 Creating mk/cc.flags.mk...done. 00:01:55.931 Type 'make' to build. 00:01:55.931 16:05:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:55.931 16:05:38 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:55.931 16:05:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:55.931 16:05:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.932 ************************************ 00:01:55.932 START TEST make 00:01:55.932 ************************************ 00:01:55.932 16:05:38 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:55.932 make[1]: Nothing to be done for 'all'. 
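Editor's note: the autobuild step above configures SPDK with a specific flag set and then builds with make -j10. The sketch below reproduces that build on a workstation; the flag list is copied from the ./configure line in this log, while the fio source path (/usr/src/fio) and the presence of liburing and SystemTap/USDT development headers are assumptions about the local environment.

    # Sketch: rebuild SPDK the way this job's autobuild stage does.
    git clone --recurse-submodules https://github.com/spdk/spdk.git
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10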
00:02:05.902 The Meson build system 00:02:05.902 Version: 1.3.1 00:02:05.902 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:05.902 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:05.902 Build type: native build 00:02:05.902 Program cat found: YES (/usr/bin/cat) 00:02:05.902 Project name: DPDK 00:02:05.902 Project version: 24.03.0 00:02:05.902 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:05.902 C linker for the host machine: cc ld.bfd 2.39-16 00:02:05.902 Host machine cpu family: x86_64 00:02:05.902 Host machine cpu: x86_64 00:02:05.902 Message: ## Building in Developer Mode ## 00:02:05.902 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.902 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.902 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.902 Program python3 found: YES (/usr/bin/python3) 00:02:05.902 Program cat found: YES (/usr/bin/cat) 00:02:05.902 Compiler for C supports arguments -march=native: YES 00:02:05.902 Checking for size of "void *" : 8 00:02:05.902 Checking for size of "void *" : 8 (cached) 00:02:05.902 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:05.902 Library m found: YES 00:02:05.902 Library numa found: YES 00:02:05.902 Has header "numaif.h" : YES 00:02:05.902 Library fdt found: NO 00:02:05.902 Library execinfo found: NO 00:02:05.902 Has header "execinfo.h" : YES 00:02:05.902 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:05.902 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.902 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.902 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.902 Run-time dependency openssl found: YES 3.0.9 00:02:05.902 Run-time dependency libpcap found: YES 1.10.4 00:02:05.902 Has header "pcap.h" with dependency libpcap: YES 00:02:05.902 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.902 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.902 Compiler for C supports arguments -Wformat: YES 00:02:05.902 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.902 Compiler for C supports arguments -Wformat-security: NO 00:02:05.902 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.902 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.902 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.902 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.902 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.902 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.902 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.902 Compiler for C supports arguments -Wundef: YES 00:02:05.902 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.902 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.902 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.902 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.902 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.902 Program objdump found: YES (/usr/bin/objdump) 00:02:05.902 Compiler for C supports arguments -mavx512f: YES 00:02:05.902 Checking if "AVX512 checking" compiles: YES 00:02:05.902 Fetching value of define "__SSE4_2__" : 1 00:02:05.902 Fetching value of define 
"__AES__" : 1 00:02:05.902 Fetching value of define "__AVX__" : 1 00:02:05.902 Fetching value of define "__AVX2__" : 1 00:02:05.902 Fetching value of define "__AVX512BW__" : (undefined) 00:02:05.902 Fetching value of define "__AVX512CD__" : (undefined) 00:02:05.902 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:05.902 Fetching value of define "__AVX512F__" : (undefined) 00:02:05.902 Fetching value of define "__AVX512VL__" : (undefined) 00:02:05.902 Fetching value of define "__PCLMUL__" : 1 00:02:05.902 Fetching value of define "__RDRND__" : 1 00:02:05.902 Fetching value of define "__RDSEED__" : 1 00:02:05.902 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.902 Fetching value of define "__znver1__" : (undefined) 00:02:05.902 Fetching value of define "__znver2__" : (undefined) 00:02:05.902 Fetching value of define "__znver3__" : (undefined) 00:02:05.902 Fetching value of define "__znver4__" : (undefined) 00:02:05.902 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.902 Message: lib/log: Defining dependency "log" 00:02:05.902 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.902 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.902 Checking for function "getentropy" : NO 00:02:05.902 Message: lib/eal: Defining dependency "eal" 00:02:05.902 Message: lib/ring: Defining dependency "ring" 00:02:05.902 Message: lib/rcu: Defining dependency "rcu" 00:02:05.902 Message: lib/mempool: Defining dependency "mempool" 00:02:05.902 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.902 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.902 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:05.902 Compiler for C supports arguments -mpclmul: YES 00:02:05.902 Compiler for C supports arguments -maes: YES 00:02:05.902 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.903 Compiler for C supports arguments -mavx512bw: YES 00:02:05.903 Compiler for C supports arguments -mavx512dq: YES 00:02:05.903 Compiler for C supports arguments -mavx512vl: YES 00:02:05.903 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.903 Compiler for C supports arguments -mavx2: YES 00:02:05.903 Compiler for C supports arguments -mavx: YES 00:02:05.903 Message: lib/net: Defining dependency "net" 00:02:05.903 Message: lib/meter: Defining dependency "meter" 00:02:05.903 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.903 Message: lib/pci: Defining dependency "pci" 00:02:05.903 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.903 Message: lib/hash: Defining dependency "hash" 00:02:05.903 Message: lib/timer: Defining dependency "timer" 00:02:05.903 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.903 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.903 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.903 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.903 Message: lib/power: Defining dependency "power" 00:02:05.903 Message: lib/reorder: Defining dependency "reorder" 00:02:05.903 Message: lib/security: Defining dependency "security" 00:02:05.903 Has header "linux/userfaultfd.h" : YES 00:02:05.903 Has header "linux/vduse.h" : YES 00:02:05.903 Message: lib/vhost: Defining dependency "vhost" 00:02:05.903 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.903 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.903 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.903 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.903 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.903 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.903 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.903 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.903 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.903 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.903 Program doxygen found: YES (/usr/bin/doxygen) 00:02:05.903 Configuring doxy-api-html.conf using configuration 00:02:05.903 Configuring doxy-api-man.conf using configuration 00:02:05.903 Program mandb found: YES (/usr/bin/mandb) 00:02:05.903 Program sphinx-build found: NO 00:02:05.903 Configuring rte_build_config.h using configuration 00:02:05.903 Message: 00:02:05.903 ================= 00:02:05.903 Applications Enabled 00:02:05.903 ================= 00:02:05.903 00:02:05.903 apps: 00:02:05.903 00:02:05.903 00:02:05.903 Message: 00:02:05.903 ================= 00:02:05.903 Libraries Enabled 00:02:05.903 ================= 00:02:05.903 00:02:05.903 libs: 00:02:05.903 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.903 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.903 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.903 00:02:05.903 Message: 00:02:05.903 =============== 00:02:05.903 Drivers Enabled 00:02:05.903 =============== 00:02:05.903 00:02:05.903 common: 00:02:05.903 00:02:05.903 bus: 00:02:05.903 pci, vdev, 00:02:05.903 mempool: 00:02:05.903 ring, 00:02:05.903 dma: 00:02:05.903 00:02:05.903 net: 00:02:05.903 00:02:05.903 crypto: 00:02:05.903 00:02:05.903 compress: 00:02:05.903 00:02:05.903 vdpa: 00:02:05.903 00:02:05.903 00:02:05.903 Message: 00:02:05.903 ================= 00:02:05.903 Content Skipped 00:02:05.903 ================= 00:02:05.903 00:02:05.903 apps: 00:02:05.903 dumpcap: explicitly disabled via build config 00:02:05.903 graph: explicitly disabled via build config 00:02:05.903 pdump: explicitly disabled via build config 00:02:05.903 proc-info: explicitly disabled via build config 00:02:05.903 test-acl: explicitly disabled via build config 00:02:05.903 test-bbdev: explicitly disabled via build config 00:02:05.903 test-cmdline: explicitly disabled via build config 00:02:05.903 test-compress-perf: explicitly disabled via build config 00:02:05.903 test-crypto-perf: explicitly disabled via build config 00:02:05.903 test-dma-perf: explicitly disabled via build config 00:02:05.903 test-eventdev: explicitly disabled via build config 00:02:05.903 test-fib: explicitly disabled via build config 00:02:05.903 test-flow-perf: explicitly disabled via build config 00:02:05.903 test-gpudev: explicitly disabled via build config 00:02:05.903 test-mldev: explicitly disabled via build config 00:02:05.903 test-pipeline: explicitly disabled via build config 00:02:05.903 test-pmd: explicitly disabled via build config 00:02:05.903 test-regex: explicitly disabled via build config 00:02:05.903 test-sad: explicitly disabled via build config 00:02:05.903 test-security-perf: explicitly disabled via build config 00:02:05.903 00:02:05.903 libs: 00:02:05.903 argparse: explicitly disabled via build config 00:02:05.903 metrics: explicitly disabled via build config 00:02:05.903 acl: explicitly disabled via build config 00:02:05.903 bbdev: explicitly disabled via build config 00:02:05.903 
bitratestats: explicitly disabled via build config 00:02:05.903 bpf: explicitly disabled via build config 00:02:05.903 cfgfile: explicitly disabled via build config 00:02:05.903 distributor: explicitly disabled via build config 00:02:05.903 efd: explicitly disabled via build config 00:02:05.903 eventdev: explicitly disabled via build config 00:02:05.903 dispatcher: explicitly disabled via build config 00:02:05.903 gpudev: explicitly disabled via build config 00:02:05.903 gro: explicitly disabled via build config 00:02:05.903 gso: explicitly disabled via build config 00:02:05.903 ip_frag: explicitly disabled via build config 00:02:05.903 jobstats: explicitly disabled via build config 00:02:05.903 latencystats: explicitly disabled via build config 00:02:05.903 lpm: explicitly disabled via build config 00:02:05.903 member: explicitly disabled via build config 00:02:05.903 pcapng: explicitly disabled via build config 00:02:05.903 rawdev: explicitly disabled via build config 00:02:05.903 regexdev: explicitly disabled via build config 00:02:05.903 mldev: explicitly disabled via build config 00:02:05.903 rib: explicitly disabled via build config 00:02:05.903 sched: explicitly disabled via build config 00:02:05.903 stack: explicitly disabled via build config 00:02:05.903 ipsec: explicitly disabled via build config 00:02:05.903 pdcp: explicitly disabled via build config 00:02:05.903 fib: explicitly disabled via build config 00:02:05.903 port: explicitly disabled via build config 00:02:05.903 pdump: explicitly disabled via build config 00:02:05.903 table: explicitly disabled via build config 00:02:05.903 pipeline: explicitly disabled via build config 00:02:05.903 graph: explicitly disabled via build config 00:02:05.903 node: explicitly disabled via build config 00:02:05.903 00:02:05.903 drivers: 00:02:05.903 common/cpt: not in enabled drivers build config 00:02:05.903 common/dpaax: not in enabled drivers build config 00:02:05.903 common/iavf: not in enabled drivers build config 00:02:05.903 common/idpf: not in enabled drivers build config 00:02:05.903 common/ionic: not in enabled drivers build config 00:02:05.903 common/mvep: not in enabled drivers build config 00:02:05.903 common/octeontx: not in enabled drivers build config 00:02:05.903 bus/auxiliary: not in enabled drivers build config 00:02:05.903 bus/cdx: not in enabled drivers build config 00:02:05.903 bus/dpaa: not in enabled drivers build config 00:02:05.903 bus/fslmc: not in enabled drivers build config 00:02:05.903 bus/ifpga: not in enabled drivers build config 00:02:05.904 bus/platform: not in enabled drivers build config 00:02:05.904 bus/uacce: not in enabled drivers build config 00:02:05.904 bus/vmbus: not in enabled drivers build config 00:02:05.904 common/cnxk: not in enabled drivers build config 00:02:05.904 common/mlx5: not in enabled drivers build config 00:02:05.904 common/nfp: not in enabled drivers build config 00:02:05.904 common/nitrox: not in enabled drivers build config 00:02:05.904 common/qat: not in enabled drivers build config 00:02:05.904 common/sfc_efx: not in enabled drivers build config 00:02:05.904 mempool/bucket: not in enabled drivers build config 00:02:05.904 mempool/cnxk: not in enabled drivers build config 00:02:05.904 mempool/dpaa: not in enabled drivers build config 00:02:05.904 mempool/dpaa2: not in enabled drivers build config 00:02:05.904 mempool/octeontx: not in enabled drivers build config 00:02:05.904 mempool/stack: not in enabled drivers build config 00:02:05.904 dma/cnxk: not in enabled drivers build 
config 00:02:05.904 dma/dpaa: not in enabled drivers build config 00:02:05.904 dma/dpaa2: not in enabled drivers build config 00:02:05.904 dma/hisilicon: not in enabled drivers build config 00:02:05.904 dma/idxd: not in enabled drivers build config 00:02:05.904 dma/ioat: not in enabled drivers build config 00:02:05.904 dma/skeleton: not in enabled drivers build config 00:02:05.904 net/af_packet: not in enabled drivers build config 00:02:05.904 net/af_xdp: not in enabled drivers build config 00:02:05.904 net/ark: not in enabled drivers build config 00:02:05.904 net/atlantic: not in enabled drivers build config 00:02:05.904 net/avp: not in enabled drivers build config 00:02:05.904 net/axgbe: not in enabled drivers build config 00:02:05.904 net/bnx2x: not in enabled drivers build config 00:02:05.904 net/bnxt: not in enabled drivers build config 00:02:05.904 net/bonding: not in enabled drivers build config 00:02:05.904 net/cnxk: not in enabled drivers build config 00:02:05.904 net/cpfl: not in enabled drivers build config 00:02:05.904 net/cxgbe: not in enabled drivers build config 00:02:05.904 net/dpaa: not in enabled drivers build config 00:02:05.904 net/dpaa2: not in enabled drivers build config 00:02:05.904 net/e1000: not in enabled drivers build config 00:02:05.904 net/ena: not in enabled drivers build config 00:02:05.904 net/enetc: not in enabled drivers build config 00:02:05.904 net/enetfec: not in enabled drivers build config 00:02:05.904 net/enic: not in enabled drivers build config 00:02:05.904 net/failsafe: not in enabled drivers build config 00:02:05.904 net/fm10k: not in enabled drivers build config 00:02:05.904 net/gve: not in enabled drivers build config 00:02:05.904 net/hinic: not in enabled drivers build config 00:02:05.904 net/hns3: not in enabled drivers build config 00:02:05.904 net/i40e: not in enabled drivers build config 00:02:05.904 net/iavf: not in enabled drivers build config 00:02:05.904 net/ice: not in enabled drivers build config 00:02:05.904 net/idpf: not in enabled drivers build config 00:02:05.904 net/igc: not in enabled drivers build config 00:02:05.904 net/ionic: not in enabled drivers build config 00:02:05.904 net/ipn3ke: not in enabled drivers build config 00:02:05.904 net/ixgbe: not in enabled drivers build config 00:02:05.904 net/mana: not in enabled drivers build config 00:02:05.904 net/memif: not in enabled drivers build config 00:02:05.904 net/mlx4: not in enabled drivers build config 00:02:05.904 net/mlx5: not in enabled drivers build config 00:02:05.904 net/mvneta: not in enabled drivers build config 00:02:05.904 net/mvpp2: not in enabled drivers build config 00:02:05.904 net/netvsc: not in enabled drivers build config 00:02:05.904 net/nfb: not in enabled drivers build config 00:02:05.904 net/nfp: not in enabled drivers build config 00:02:05.904 net/ngbe: not in enabled drivers build config 00:02:05.904 net/null: not in enabled drivers build config 00:02:05.904 net/octeontx: not in enabled drivers build config 00:02:05.904 net/octeon_ep: not in enabled drivers build config 00:02:05.904 net/pcap: not in enabled drivers build config 00:02:05.904 net/pfe: not in enabled drivers build config 00:02:05.904 net/qede: not in enabled drivers build config 00:02:05.904 net/ring: not in enabled drivers build config 00:02:05.904 net/sfc: not in enabled drivers build config 00:02:05.904 net/softnic: not in enabled drivers build config 00:02:05.904 net/tap: not in enabled drivers build config 00:02:05.904 net/thunderx: not in enabled drivers build config 00:02:05.904 
net/txgbe: not in enabled drivers build config 00:02:05.904 net/vdev_netvsc: not in enabled drivers build config 00:02:05.904 net/vhost: not in enabled drivers build config 00:02:05.904 net/virtio: not in enabled drivers build config 00:02:05.904 net/vmxnet3: not in enabled drivers build config 00:02:05.904 raw/*: missing internal dependency, "rawdev" 00:02:05.904 crypto/armv8: not in enabled drivers build config 00:02:05.904 crypto/bcmfs: not in enabled drivers build config 00:02:05.904 crypto/caam_jr: not in enabled drivers build config 00:02:05.904 crypto/ccp: not in enabled drivers build config 00:02:05.904 crypto/cnxk: not in enabled drivers build config 00:02:05.904 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.904 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.904 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.904 crypto/mlx5: not in enabled drivers build config 00:02:05.904 crypto/mvsam: not in enabled drivers build config 00:02:05.904 crypto/nitrox: not in enabled drivers build config 00:02:05.904 crypto/null: not in enabled drivers build config 00:02:05.904 crypto/octeontx: not in enabled drivers build config 00:02:05.904 crypto/openssl: not in enabled drivers build config 00:02:05.904 crypto/scheduler: not in enabled drivers build config 00:02:05.904 crypto/uadk: not in enabled drivers build config 00:02:05.904 crypto/virtio: not in enabled drivers build config 00:02:05.904 compress/isal: not in enabled drivers build config 00:02:05.904 compress/mlx5: not in enabled drivers build config 00:02:05.904 compress/nitrox: not in enabled drivers build config 00:02:05.904 compress/octeontx: not in enabled drivers build config 00:02:05.904 compress/zlib: not in enabled drivers build config 00:02:05.904 regex/*: missing internal dependency, "regexdev" 00:02:05.904 ml/*: missing internal dependency, "mldev" 00:02:05.904 vdpa/ifc: not in enabled drivers build config 00:02:05.904 vdpa/mlx5: not in enabled drivers build config 00:02:05.904 vdpa/nfp: not in enabled drivers build config 00:02:05.904 vdpa/sfc: not in enabled drivers build config 00:02:05.904 event/*: missing internal dependency, "eventdev" 00:02:05.904 baseband/*: missing internal dependency, "bbdev" 00:02:05.904 gpu/*: missing internal dependency, "gpudev" 00:02:05.904 00:02:05.904 00:02:05.904 Build targets in project: 85 00:02:05.904 00:02:05.904 DPDK 24.03.0 00:02:05.904 00:02:05.904 User defined options 00:02:05.904 buildtype : debug 00:02:05.904 default_library : shared 00:02:05.904 libdir : lib 00:02:05.904 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:05.904 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.904 c_link_args : 00:02:05.904 cpu_instruction_set: native 00:02:05.904 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.904 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.904 enable_docs : false 00:02:05.904 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:05.904 enable_kmods : false 00:02:05.904 max_lcores : 128 00:02:05.904 tests : false 00:02:05.904 00:02:05.904 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.469 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:06.469 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:06.469 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:06.469 [3/268] Linking static target lib/librte_log.a 00:02:06.469 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:06.469 [5/268] Linking static target lib/librte_kvargs.a 00:02:06.469 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.035 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.035 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.035 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.293 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.293 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.293 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.293 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.293 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.551 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.551 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.551 [17/268] Linking target lib/librte_log.so.24.1 00:02:07.551 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.551 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.551 [20/268] Linking static target lib/librte_telemetry.a 00:02:07.809 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:07.809 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:07.809 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.067 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.067 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.326 [26/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.326 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.326 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.326 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.326 [30/268] Linking target lib/librte_telemetry.so.24.1 00:02:08.326 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.326 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.583 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.583 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.583 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:08.583 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:08.583 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.149 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.149 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.149 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.149 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.149 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.149 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.149 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.149 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.149 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.406 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.406 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.664 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.664 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.664 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.922 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.922 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.179 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.180 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.180 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.180 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.180 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.438 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.438 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.438 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.438 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.696 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.696 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.954 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.213 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.213 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.213 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.213 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.213 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.471 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.471 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.471 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.471 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.729 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.729 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.729 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.988 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.988 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.246 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.246 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.246 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.246 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.505 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.505 [85/268] Linking static target lib/librte_eal.a 00:02:12.505 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.505 [87/268] Linking static target lib/librte_ring.a 00:02:12.764 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.764 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.764 [90/268] Linking static target lib/librte_rcu.a 00:02:13.022 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.022 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.022 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.022 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.022 [95/268] Linking static target lib/librte_mempool.a 00:02:13.306 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.306 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.306 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.564 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.564 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.821 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.821 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.821 [103/268] Linking static target lib/librte_mbuf.a 00:02:13.821 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.079 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.079 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.337 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.337 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.337 [109/268] Linking static target lib/librte_net.a 00:02:14.337 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.595 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.595 [112/268] Linking static target lib/librte_meter.a 00:02:14.595 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.852 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.852 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.852 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.852 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.110 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.110 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.374 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.655 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.655 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.926 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.926 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.926 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.926 [126/268] Linking static target lib/librte_pci.a 00:02:15.926 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.184 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.184 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.184 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.184 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.184 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.184 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.442 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.442 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.442 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.442 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.442 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.442 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.442 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.442 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.442 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.442 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.442 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.442 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.700 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.700 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.700 [148/268] Linking static target lib/librte_ethdev.a 00:02:16.958 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.958 [150/268] Linking static target lib/librte_cmdline.a 00:02:16.958 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:17.217 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:17.217 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.217 [154/268] Linking static target lib/librte_timer.a 00:02:17.217 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.217 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:17.475 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.734 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.734 [159/268] Linking static target lib/librte_compressdev.a 00:02:17.734 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 
00:02:17.734 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:17.734 [162/268] Linking static target lib/librte_hash.a 00:02:17.734 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.734 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.992 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:18.250 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.250 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:18.508 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.508 [169/268] Linking static target lib/librte_dmadev.a 00:02:18.508 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.508 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.508 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:18.508 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.508 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.508 [175/268] Linking static target lib/librte_cryptodev.a 00:02:18.766 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.766 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:19.024 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.024 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:19.024 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.024 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:19.282 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.282 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:19.282 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.540 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:19.540 [186/268] Linking static target lib/librte_power.a 00:02:19.798 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:19.798 [188/268] Linking static target lib/librte_reorder.a 00:02:19.798 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:19.798 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:19.798 [191/268] Linking static target lib/librte_security.a 00:02:20.056 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.056 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:20.315 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.315 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:20.573 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.573 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.832 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.832 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:20.832 [200/268] Compiling 
C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.832 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.832 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:21.091 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:21.349 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.349 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:21.349 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:21.349 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.349 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.607 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:21.608 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.608 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.608 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:21.608 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:21.608 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.608 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.608 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.608 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:21.608 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.608 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.867 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:21.867 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:21.867 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:21.867 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.126 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.126 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.126 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.126 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:22.385 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.953 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:22.953 [230/268] Linking static target lib/librte_vhost.a 00:02:23.888 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.888 [232/268] Linking target lib/librte_eal.so.24.1 00:02:23.888 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:23.888 [234/268] Linking target lib/librte_ring.so.24.1 00:02:23.888 [235/268] Linking target lib/librte_timer.so.24.1 00:02:23.888 [236/268] Linking target lib/librte_meter.so.24.1 00:02:23.888 [237/268] Linking target lib/librte_pci.so.24.1 00:02:23.888 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:23.888 [239/268] Linking target 
drivers/librte_bus_vdev.so.24.1 00:02:24.147 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.147 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.147 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.147 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.148 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.148 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:24.148 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.148 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:24.406 [248/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.406 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.406 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.406 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.406 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.406 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.666 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.666 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:24.666 [256/268] Linking target lib/librte_net.so.24.1 00:02:24.666 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.666 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:24.924 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:24.924 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:24.924 [261/268] Linking target lib/librte_security.so.24.1 00:02:24.924 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:24.924 [263/268] Linking target lib/librte_hash.so.24.1 00:02:24.924 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:24.924 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:24.924 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.183 [267/268] Linking target lib/librte_power.so.24.1 00:02:25.183 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.183 INFO: autodetecting backend as ninja 00:02:25.183 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:26.559 CC lib/ut_mock/mock.o 00:02:26.559 CC lib/ut/ut.o 00:02:26.559 CC lib/log/log.o 00:02:26.559 CC lib/log/log_flags.o 00:02:26.559 CC lib/log/log_deprecated.o 00:02:26.559 LIB libspdk_ut.a 00:02:26.559 LIB libspdk_log.a 00:02:26.559 LIB libspdk_ut_mock.a 00:02:26.559 SO libspdk_ut.so.2.0 00:02:26.559 SO libspdk_ut_mock.so.6.0 00:02:26.559 SO libspdk_log.so.7.0 00:02:26.559 SYMLINK libspdk_ut.so 00:02:26.559 SYMLINK libspdk_ut_mock.so 00:02:26.559 SYMLINK libspdk_log.so 00:02:26.818 CC lib/util/base64.o 00:02:26.818 CC lib/ioat/ioat.o 00:02:26.818 CC lib/util/bit_array.o 00:02:26.818 CC lib/util/cpuset.o 00:02:26.818 CC lib/dma/dma.o 00:02:26.818 CC lib/util/crc32.o 00:02:26.818 CC lib/util/crc16.o 00:02:26.818 CXX lib/trace_parser/trace.o 00:02:26.818 CC lib/util/crc32c.o 00:02:26.818 CC lib/vfio_user/host/vfio_user_pci.o 00:02:27.076 CC lib/util/crc32_ieee.o 00:02:27.076 CC lib/util/crc64.o 
00:02:27.076 CC lib/util/dif.o 00:02:27.076 CC lib/util/fd.o 00:02:27.076 LIB libspdk_dma.a 00:02:27.076 CC lib/util/file.o 00:02:27.076 CC lib/util/hexlify.o 00:02:27.076 SO libspdk_dma.so.4.0 00:02:27.076 CC lib/util/iov.o 00:02:27.076 LIB libspdk_ioat.a 00:02:27.076 SYMLINK libspdk_dma.so 00:02:27.076 CC lib/util/math.o 00:02:27.076 CC lib/util/pipe.o 00:02:27.076 SO libspdk_ioat.so.7.0 00:02:27.076 CC lib/util/strerror_tls.o 00:02:27.076 CC lib/vfio_user/host/vfio_user.o 00:02:27.335 CC lib/util/string.o 00:02:27.335 SYMLINK libspdk_ioat.so 00:02:27.335 CC lib/util/uuid.o 00:02:27.335 CC lib/util/fd_group.o 00:02:27.335 CC lib/util/xor.o 00:02:27.335 CC lib/util/zipf.o 00:02:27.335 LIB libspdk_vfio_user.a 00:02:27.335 SO libspdk_vfio_user.so.5.0 00:02:27.594 SYMLINK libspdk_vfio_user.so 00:02:27.594 LIB libspdk_util.a 00:02:27.594 SO libspdk_util.so.9.1 00:02:27.854 SYMLINK libspdk_util.so 00:02:27.854 LIB libspdk_trace_parser.a 00:02:27.854 SO libspdk_trace_parser.so.5.0 00:02:27.854 CC lib/rdma_provider/common.o 00:02:27.854 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:28.114 CC lib/vmd/vmd.o 00:02:28.114 CC lib/rdma_utils/rdma_utils.o 00:02:28.114 CC lib/env_dpdk/env.o 00:02:28.114 CC lib/vmd/led.o 00:02:28.114 CC lib/idxd/idxd.o 00:02:28.114 SYMLINK libspdk_trace_parser.so 00:02:28.114 CC lib/idxd/idxd_user.o 00:02:28.114 CC lib/conf/conf.o 00:02:28.114 CC lib/json/json_parse.o 00:02:28.114 CC lib/json/json_util.o 00:02:28.114 LIB libspdk_rdma_provider.a 00:02:28.114 CC lib/json/json_write.o 00:02:28.114 SO libspdk_rdma_provider.so.6.0 00:02:28.114 LIB libspdk_conf.a 00:02:28.114 CC lib/idxd/idxd_kernel.o 00:02:28.373 LIB libspdk_rdma_utils.a 00:02:28.373 SYMLINK libspdk_rdma_provider.so 00:02:28.373 SO libspdk_conf.so.6.0 00:02:28.373 CC lib/env_dpdk/memory.o 00:02:28.373 CC lib/env_dpdk/pci.o 00:02:28.373 SO libspdk_rdma_utils.so.1.0 00:02:28.373 SYMLINK libspdk_conf.so 00:02:28.373 CC lib/env_dpdk/init.o 00:02:28.373 SYMLINK libspdk_rdma_utils.so 00:02:28.373 CC lib/env_dpdk/threads.o 00:02:28.373 CC lib/env_dpdk/pci_ioat.o 00:02:28.373 CC lib/env_dpdk/pci_virtio.o 00:02:28.373 LIB libspdk_json.a 00:02:28.373 SO libspdk_json.so.6.0 00:02:28.373 CC lib/env_dpdk/pci_vmd.o 00:02:28.631 CC lib/env_dpdk/pci_idxd.o 00:02:28.631 CC lib/env_dpdk/pci_event.o 00:02:28.631 LIB libspdk_idxd.a 00:02:28.631 SYMLINK libspdk_json.so 00:02:28.631 CC lib/env_dpdk/sigbus_handler.o 00:02:28.631 SO libspdk_idxd.so.12.0 00:02:28.631 LIB libspdk_vmd.a 00:02:28.631 SO libspdk_vmd.so.6.0 00:02:28.631 CC lib/env_dpdk/pci_dpdk.o 00:02:28.631 SYMLINK libspdk_idxd.so 00:02:28.631 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:28.631 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:28.631 SYMLINK libspdk_vmd.so 00:02:28.890 CC lib/jsonrpc/jsonrpc_server.o 00:02:28.890 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:28.890 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:28.890 CC lib/jsonrpc/jsonrpc_client.o 00:02:29.148 LIB libspdk_jsonrpc.a 00:02:29.148 SO libspdk_jsonrpc.so.6.0 00:02:29.148 SYMLINK libspdk_jsonrpc.so 00:02:29.406 LIB libspdk_env_dpdk.a 00:02:29.406 CC lib/rpc/rpc.o 00:02:29.663 SO libspdk_env_dpdk.so.14.1 00:02:29.663 LIB libspdk_rpc.a 00:02:29.663 SYMLINK libspdk_env_dpdk.so 00:02:29.920 SO libspdk_rpc.so.6.0 00:02:29.920 SYMLINK libspdk_rpc.so 00:02:30.176 CC lib/keyring/keyring.o 00:02:30.176 CC lib/keyring/keyring_rpc.o 00:02:30.176 CC lib/notify/notify.o 00:02:30.176 CC lib/notify/notify_rpc.o 00:02:30.176 CC lib/trace/trace.o 00:02:30.176 CC lib/trace/trace_flags.o 00:02:30.176 CC lib/trace/trace_rpc.o 
00:02:30.176 LIB libspdk_notify.a 00:02:30.434 SO libspdk_notify.so.6.0 00:02:30.434 SYMLINK libspdk_notify.so 00:02:30.434 LIB libspdk_keyring.a 00:02:30.434 LIB libspdk_trace.a 00:02:30.434 SO libspdk_keyring.so.1.0 00:02:30.434 SO libspdk_trace.so.10.0 00:02:30.434 SYMLINK libspdk_keyring.so 00:02:30.434 SYMLINK libspdk_trace.so 00:02:30.692 CC lib/thread/thread.o 00:02:30.692 CC lib/thread/iobuf.o 00:02:30.692 CC lib/sock/sock.o 00:02:30.692 CC lib/sock/sock_rpc.o 00:02:31.257 LIB libspdk_sock.a 00:02:31.258 SO libspdk_sock.so.10.0 00:02:31.258 SYMLINK libspdk_sock.so 00:02:31.515 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:31.515 CC lib/nvme/nvme_fabric.o 00:02:31.515 CC lib/nvme/nvme_ctrlr.o 00:02:31.515 CC lib/nvme/nvme_ns_cmd.o 00:02:31.515 CC lib/nvme/nvme_ns.o 00:02:31.515 CC lib/nvme/nvme_pcie.o 00:02:31.515 CC lib/nvme/nvme_pcie_common.o 00:02:31.515 CC lib/nvme/nvme_qpair.o 00:02:31.515 CC lib/nvme/nvme.o 00:02:32.449 CC lib/nvme/nvme_quirks.o 00:02:32.449 CC lib/nvme/nvme_transport.o 00:02:32.449 CC lib/nvme/nvme_discovery.o 00:02:32.449 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:32.449 LIB libspdk_thread.a 00:02:32.449 SO libspdk_thread.so.10.1 00:02:32.449 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:32.449 CC lib/nvme/nvme_tcp.o 00:02:32.449 SYMLINK libspdk_thread.so 00:02:32.707 CC lib/accel/accel.o 00:02:32.707 CC lib/nvme/nvme_opal.o 00:02:32.707 CC lib/blob/blobstore.o 00:02:32.964 CC lib/nvme/nvme_io_msg.o 00:02:32.964 CC lib/accel/accel_rpc.o 00:02:32.964 CC lib/nvme/nvme_poll_group.o 00:02:33.222 CC lib/accel/accel_sw.o 00:02:33.222 CC lib/blob/request.o 00:02:33.222 CC lib/blob/zeroes.o 00:02:33.479 CC lib/blob/blob_bs_dev.o 00:02:33.479 CC lib/init/json_config.o 00:02:33.479 CC lib/init/subsystem.o 00:02:33.479 CC lib/virtio/virtio.o 00:02:33.737 CC lib/virtio/virtio_vhost_user.o 00:02:33.737 CC lib/virtio/virtio_vfio_user.o 00:02:33.737 CC lib/init/subsystem_rpc.o 00:02:33.737 CC lib/init/rpc.o 00:02:33.737 CC lib/virtio/virtio_pci.o 00:02:33.737 LIB libspdk_accel.a 00:02:33.737 SO libspdk_accel.so.15.1 00:02:33.737 CC lib/nvme/nvme_zns.o 00:02:33.737 SYMLINK libspdk_accel.so 00:02:33.737 LIB libspdk_init.a 00:02:33.737 CC lib/nvme/nvme_stubs.o 00:02:33.995 CC lib/nvme/nvme_auth.o 00:02:33.995 CC lib/nvme/nvme_cuse.o 00:02:33.995 SO libspdk_init.so.5.0 00:02:33.995 CC lib/nvme/nvme_rdma.o 00:02:33.995 SYMLINK libspdk_init.so 00:02:33.995 LIB libspdk_virtio.a 00:02:33.995 CC lib/bdev/bdev.o 00:02:33.995 SO libspdk_virtio.so.7.0 00:02:33.995 SYMLINK libspdk_virtio.so 00:02:34.253 CC lib/bdev/bdev_rpc.o 00:02:34.253 CC lib/event/app.o 00:02:34.253 CC lib/bdev/bdev_zone.o 00:02:34.253 CC lib/event/reactor.o 00:02:34.511 CC lib/event/log_rpc.o 00:02:34.511 CC lib/event/app_rpc.o 00:02:34.511 CC lib/event/scheduler_static.o 00:02:34.511 CC lib/bdev/part.o 00:02:34.511 CC lib/bdev/scsi_nvme.o 00:02:34.769 LIB libspdk_event.a 00:02:34.769 SO libspdk_event.so.14.0 00:02:35.027 SYMLINK libspdk_event.so 00:02:35.285 LIB libspdk_nvme.a 00:02:35.543 SO libspdk_nvme.so.13.1 00:02:35.801 LIB libspdk_blob.a 00:02:35.801 SYMLINK libspdk_nvme.so 00:02:35.801 SO libspdk_blob.so.11.0 00:02:36.059 SYMLINK libspdk_blob.so 00:02:36.318 CC lib/blobfs/blobfs.o 00:02:36.318 CC lib/blobfs/tree.o 00:02:36.318 CC lib/lvol/lvol.o 00:02:36.576 LIB libspdk_bdev.a 00:02:36.834 SO libspdk_bdev.so.15.1 00:02:36.834 SYMLINK libspdk_bdev.so 00:02:37.126 CC lib/ftl/ftl_core.o 00:02:37.126 CC lib/ftl/ftl_init.o 00:02:37.126 CC lib/ftl/ftl_layout.o 00:02:37.126 CC lib/ftl/ftl_debug.o 00:02:37.126 CC lib/scsi/dev.o 
00:02:37.126 CC lib/nvmf/ctrlr.o 00:02:37.126 CC lib/ublk/ublk.o 00:02:37.126 CC lib/nbd/nbd.o 00:02:37.126 LIB libspdk_blobfs.a 00:02:37.126 SO libspdk_blobfs.so.10.0 00:02:37.126 LIB libspdk_lvol.a 00:02:37.126 SO libspdk_lvol.so.10.0 00:02:37.411 SYMLINK libspdk_blobfs.so 00:02:37.411 CC lib/scsi/lun.o 00:02:37.411 SYMLINK libspdk_lvol.so 00:02:37.411 CC lib/scsi/port.o 00:02:37.411 CC lib/ftl/ftl_io.o 00:02:37.411 CC lib/nvmf/ctrlr_discovery.o 00:02:37.411 CC lib/nvmf/ctrlr_bdev.o 00:02:37.411 CC lib/nvmf/subsystem.o 00:02:37.411 CC lib/nvmf/nvmf.o 00:02:37.411 CC lib/nbd/nbd_rpc.o 00:02:37.680 CC lib/nvmf/nvmf_rpc.o 00:02:37.680 CC lib/ftl/ftl_sb.o 00:02:37.680 CC lib/scsi/scsi.o 00:02:37.680 LIB libspdk_nbd.a 00:02:37.680 SO libspdk_nbd.so.7.0 00:02:37.680 CC lib/ublk/ublk_rpc.o 00:02:37.680 CC lib/scsi/scsi_bdev.o 00:02:37.680 CC lib/ftl/ftl_l2p.o 00:02:37.680 SYMLINK libspdk_nbd.so 00:02:37.680 CC lib/ftl/ftl_l2p_flat.o 00:02:37.938 CC lib/nvmf/transport.o 00:02:37.938 LIB libspdk_ublk.a 00:02:37.938 SO libspdk_ublk.so.3.0 00:02:37.938 CC lib/nvmf/tcp.o 00:02:37.938 CC lib/ftl/ftl_nv_cache.o 00:02:37.938 SYMLINK libspdk_ublk.so 00:02:37.938 CC lib/ftl/ftl_band.o 00:02:37.938 CC lib/nvmf/stubs.o 00:02:38.196 CC lib/scsi/scsi_pr.o 00:02:38.454 CC lib/nvmf/mdns_server.o 00:02:38.454 CC lib/nvmf/rdma.o 00:02:38.454 CC lib/ftl/ftl_band_ops.o 00:02:38.454 CC lib/nvmf/auth.o 00:02:38.454 CC lib/scsi/scsi_rpc.o 00:02:38.454 CC lib/scsi/task.o 00:02:38.712 CC lib/ftl/ftl_writer.o 00:02:38.712 CC lib/ftl/ftl_rq.o 00:02:38.712 CC lib/ftl/ftl_reloc.o 00:02:38.712 LIB libspdk_scsi.a 00:02:38.712 CC lib/ftl/ftl_l2p_cache.o 00:02:38.712 CC lib/ftl/ftl_p2l.o 00:02:38.712 SO libspdk_scsi.so.9.0 00:02:38.971 CC lib/ftl/mngt/ftl_mngt.o 00:02:38.971 SYMLINK libspdk_scsi.so 00:02:38.971 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:38.971 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:38.971 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:39.229 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:39.229 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:39.229 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:39.229 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:39.229 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:39.486 CC lib/iscsi/conn.o 00:02:39.486 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:39.486 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:39.486 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:39.486 CC lib/vhost/vhost.o 00:02:39.486 CC lib/vhost/vhost_rpc.o 00:02:39.486 CC lib/vhost/vhost_scsi.o 00:02:39.486 CC lib/vhost/vhost_blk.o 00:02:39.486 CC lib/iscsi/init_grp.o 00:02:39.486 CC lib/vhost/rte_vhost_user.o 00:02:39.743 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:39.743 CC lib/ftl/utils/ftl_conf.o 00:02:39.743 CC lib/iscsi/iscsi.o 00:02:40.001 CC lib/iscsi/md5.o 00:02:40.001 CC lib/iscsi/param.o 00:02:40.001 CC lib/ftl/utils/ftl_md.o 00:02:40.259 CC lib/iscsi/portal_grp.o 00:02:40.259 CC lib/ftl/utils/ftl_mempool.o 00:02:40.259 CC lib/iscsi/tgt_node.o 00:02:40.259 CC lib/iscsi/iscsi_subsystem.o 00:02:40.259 CC lib/iscsi/iscsi_rpc.o 00:02:40.518 LIB libspdk_nvmf.a 00:02:40.518 CC lib/ftl/utils/ftl_bitmap.o 00:02:40.518 CC lib/iscsi/task.o 00:02:40.518 CC lib/ftl/utils/ftl_property.o 00:02:40.518 SO libspdk_nvmf.so.18.1 00:02:40.776 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:40.776 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:40.776 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:40.776 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:40.776 LIB libspdk_vhost.a 00:02:40.776 SYMLINK libspdk_nvmf.so 00:02:40.776 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:40.776 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:40.776 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:40.776 SO libspdk_vhost.so.8.0 00:02:40.776 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:40.776 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:40.777 SYMLINK libspdk_vhost.so 00:02:40.777 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:40.777 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:40.777 CC lib/ftl/base/ftl_base_dev.o 00:02:41.035 CC lib/ftl/base/ftl_base_bdev.o 00:02:41.035 CC lib/ftl/ftl_trace.o 00:02:41.293 LIB libspdk_ftl.a 00:02:41.293 LIB libspdk_iscsi.a 00:02:41.293 SO libspdk_iscsi.so.8.0 00:02:41.552 SO libspdk_ftl.so.9.0 00:02:41.552 SYMLINK libspdk_iscsi.so 00:02:41.811 SYMLINK libspdk_ftl.so 00:02:42.069 CC module/env_dpdk/env_dpdk_rpc.o 00:02:42.328 CC module/sock/uring/uring.o 00:02:42.328 CC module/sock/posix/posix.o 00:02:42.328 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:42.328 CC module/keyring/linux/keyring.o 00:02:42.328 CC module/accel/error/accel_error.o 00:02:42.328 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:42.328 CC module/blob/bdev/blob_bdev.o 00:02:42.328 CC module/keyring/file/keyring.o 00:02:42.328 CC module/scheduler/gscheduler/gscheduler.o 00:02:42.328 LIB libspdk_env_dpdk_rpc.a 00:02:42.328 SO libspdk_env_dpdk_rpc.so.6.0 00:02:42.328 SYMLINK libspdk_env_dpdk_rpc.so 00:02:42.328 CC module/keyring/linux/keyring_rpc.o 00:02:42.328 CC module/keyring/file/keyring_rpc.o 00:02:42.328 LIB libspdk_scheduler_gscheduler.a 00:02:42.328 LIB libspdk_scheduler_dpdk_governor.a 00:02:42.587 CC module/accel/error/accel_error_rpc.o 00:02:42.587 SO libspdk_scheduler_gscheduler.so.4.0 00:02:42.587 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:42.587 LIB libspdk_scheduler_dynamic.a 00:02:42.587 SO libspdk_scheduler_dynamic.so.4.0 00:02:42.587 SYMLINK libspdk_scheduler_gscheduler.so 00:02:42.587 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:42.587 LIB libspdk_keyring_linux.a 00:02:42.587 LIB libspdk_blob_bdev.a 00:02:42.587 LIB libspdk_keyring_file.a 00:02:42.587 SO libspdk_keyring_linux.so.1.0 00:02:42.587 SO libspdk_blob_bdev.so.11.0 00:02:42.587 CC module/accel/ioat/accel_ioat.o 00:02:42.587 SYMLINK libspdk_scheduler_dynamic.so 00:02:42.587 SO libspdk_keyring_file.so.1.0 00:02:42.587 LIB libspdk_accel_error.a 00:02:42.587 CC module/accel/ioat/accel_ioat_rpc.o 00:02:42.587 SYMLINK libspdk_blob_bdev.so 00:02:42.587 SO libspdk_accel_error.so.2.0 00:02:42.587 SYMLINK libspdk_keyring_linux.so 00:02:42.587 SYMLINK libspdk_keyring_file.so 00:02:42.587 SYMLINK libspdk_accel_error.so 00:02:42.845 CC module/accel/iaa/accel_iaa.o 00:02:42.845 CC module/accel/iaa/accel_iaa_rpc.o 00:02:42.845 CC module/accel/dsa/accel_dsa.o 00:02:42.845 CC module/accel/dsa/accel_dsa_rpc.o 00:02:42.845 LIB libspdk_accel_ioat.a 00:02:42.845 SO libspdk_accel_ioat.so.6.0 00:02:42.845 SYMLINK libspdk_accel_ioat.so 00:02:42.845 LIB libspdk_accel_iaa.a 00:02:42.845 LIB libspdk_sock_uring.a 00:02:43.102 CC module/blobfs/bdev/blobfs_bdev.o 00:02:43.102 CC module/bdev/error/vbdev_error.o 00:02:43.102 CC module/bdev/delay/vbdev_delay.o 00:02:43.102 SO libspdk_accel_iaa.so.3.0 00:02:43.102 SO libspdk_sock_uring.so.5.0 00:02:43.102 LIB libspdk_sock_posix.a 00:02:43.102 LIB libspdk_accel_dsa.a 00:02:43.102 SO libspdk_sock_posix.so.6.0 00:02:43.102 CC module/bdev/gpt/gpt.o 00:02:43.102 SO libspdk_accel_dsa.so.5.0 00:02:43.102 SYMLINK libspdk_sock_uring.so 00:02:43.102 SYMLINK libspdk_accel_iaa.so 00:02:43.102 CC module/bdev/error/vbdev_error_rpc.o 00:02:43.102 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:43.102 CC 
module/bdev/malloc/bdev_malloc.o 00:02:43.102 CC module/bdev/lvol/vbdev_lvol.o 00:02:43.102 SYMLINK libspdk_sock_posix.so 00:02:43.102 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:43.102 SYMLINK libspdk_accel_dsa.so 00:02:43.102 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:43.359 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:43.359 CC module/bdev/gpt/vbdev_gpt.o 00:02:43.359 LIB libspdk_bdev_error.a 00:02:43.359 LIB libspdk_blobfs_bdev.a 00:02:43.359 SO libspdk_blobfs_bdev.so.6.0 00:02:43.359 SO libspdk_bdev_error.so.6.0 00:02:43.359 CC module/bdev/null/bdev_null.o 00:02:43.359 SYMLINK libspdk_blobfs_bdev.so 00:02:43.359 SYMLINK libspdk_bdev_error.so 00:02:43.359 CC module/bdev/null/bdev_null_rpc.o 00:02:43.359 LIB libspdk_bdev_delay.a 00:02:43.359 LIB libspdk_bdev_malloc.a 00:02:43.359 SO libspdk_bdev_delay.so.6.0 00:02:43.617 SO libspdk_bdev_malloc.so.6.0 00:02:43.617 CC module/bdev/nvme/bdev_nvme.o 00:02:43.617 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:43.617 SYMLINK libspdk_bdev_delay.so 00:02:43.617 LIB libspdk_bdev_gpt.a 00:02:43.617 CC module/bdev/raid/bdev_raid.o 00:02:43.617 CC module/bdev/passthru/vbdev_passthru.o 00:02:43.617 SYMLINK libspdk_bdev_malloc.so 00:02:43.617 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:43.617 SO libspdk_bdev_gpt.so.6.0 00:02:43.617 LIB libspdk_bdev_null.a 00:02:43.617 LIB libspdk_bdev_lvol.a 00:02:43.617 SO libspdk_bdev_null.so.6.0 00:02:43.617 SO libspdk_bdev_lvol.so.6.0 00:02:43.617 SYMLINK libspdk_bdev_gpt.so 00:02:43.617 SYMLINK libspdk_bdev_null.so 00:02:43.617 CC module/bdev/nvme/nvme_rpc.o 00:02:43.617 CC module/bdev/split/vbdev_split.o 00:02:43.617 SYMLINK libspdk_bdev_lvol.so 00:02:43.617 CC module/bdev/nvme/bdev_mdns_client.o 00:02:43.874 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:43.874 CC module/bdev/uring/bdev_uring.o 00:02:43.874 LIB libspdk_bdev_passthru.a 00:02:43.874 SO libspdk_bdev_passthru.so.6.0 00:02:43.874 CC module/bdev/raid/bdev_raid_rpc.o 00:02:43.874 CC module/bdev/aio/bdev_aio.o 00:02:43.874 CC module/bdev/raid/bdev_raid_sb.o 00:02:43.874 CC module/bdev/split/vbdev_split_rpc.o 00:02:43.874 SYMLINK libspdk_bdev_passthru.so 00:02:43.874 CC module/bdev/uring/bdev_uring_rpc.o 00:02:44.130 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:44.130 LIB libspdk_bdev_split.a 00:02:44.130 CC module/bdev/raid/raid0.o 00:02:44.130 SO libspdk_bdev_split.so.6.0 00:02:44.130 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.130 SYMLINK libspdk_bdev_split.so 00:02:44.130 CC module/bdev/nvme/vbdev_opal.o 00:02:44.388 LIB libspdk_bdev_zone_block.a 00:02:44.388 LIB libspdk_bdev_uring.a 00:02:44.388 SO libspdk_bdev_zone_block.so.6.0 00:02:44.388 SO libspdk_bdev_uring.so.6.0 00:02:44.388 CC module/bdev/ftl/bdev_ftl.o 00:02:44.388 SYMLINK libspdk_bdev_zone_block.so 00:02:44.388 LIB libspdk_bdev_aio.a 00:02:44.388 CC module/bdev/raid/raid1.o 00:02:44.388 SO libspdk_bdev_aio.so.6.0 00:02:44.388 SYMLINK libspdk_bdev_uring.so 00:02:44.388 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:44.388 CC module/bdev/raid/concat.o 00:02:44.388 CC module/bdev/iscsi/bdev_iscsi.o 00:02:44.388 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:44.388 SYMLINK libspdk_bdev_aio.so 00:02:44.388 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:44.646 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:44.646 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:44.646 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:44.646 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:44.646 LIB libspdk_bdev_ftl.a 00:02:44.646 LIB libspdk_bdev_raid.a 00:02:44.646 SO libspdk_bdev_ftl.so.6.0 00:02:44.902 SO 
libspdk_bdev_raid.so.6.0 00:02:44.902 SYMLINK libspdk_bdev_ftl.so 00:02:44.902 LIB libspdk_bdev_iscsi.a 00:02:44.902 SYMLINK libspdk_bdev_raid.so 00:02:44.902 SO libspdk_bdev_iscsi.so.6.0 00:02:44.902 SYMLINK libspdk_bdev_iscsi.so 00:02:45.160 LIB libspdk_bdev_virtio.a 00:02:45.160 SO libspdk_bdev_virtio.so.6.0 00:02:45.160 SYMLINK libspdk_bdev_virtio.so 00:02:46.094 LIB libspdk_bdev_nvme.a 00:02:46.094 SO libspdk_bdev_nvme.so.7.0 00:02:46.094 SYMLINK libspdk_bdev_nvme.so 00:02:46.661 CC module/event/subsystems/vmd/vmd.o 00:02:46.661 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:46.661 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:46.661 CC module/event/subsystems/scheduler/scheduler.o 00:02:46.661 CC module/event/subsystems/sock/sock.o 00:02:46.661 CC module/event/subsystems/keyring/keyring.o 00:02:46.661 CC module/event/subsystems/iobuf/iobuf.o 00:02:46.661 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:46.661 LIB libspdk_event_scheduler.a 00:02:46.661 LIB libspdk_event_keyring.a 00:02:46.661 LIB libspdk_event_vhost_blk.a 00:02:46.661 LIB libspdk_event_vmd.a 00:02:46.661 LIB libspdk_event_sock.a 00:02:46.661 LIB libspdk_event_iobuf.a 00:02:46.661 SO libspdk_event_vhost_blk.so.3.0 00:02:46.661 SO libspdk_event_scheduler.so.4.0 00:02:46.661 SO libspdk_event_keyring.so.1.0 00:02:46.661 SO libspdk_event_sock.so.5.0 00:02:46.661 SO libspdk_event_vmd.so.6.0 00:02:46.661 SO libspdk_event_iobuf.so.3.0 00:02:46.920 SYMLINK libspdk_event_vhost_blk.so 00:02:46.920 SYMLINK libspdk_event_keyring.so 00:02:46.920 SYMLINK libspdk_event_sock.so 00:02:46.920 SYMLINK libspdk_event_scheduler.so 00:02:46.920 SYMLINK libspdk_event_vmd.so 00:02:46.920 SYMLINK libspdk_event_iobuf.so 00:02:47.179 CC module/event/subsystems/accel/accel.o 00:02:47.179 LIB libspdk_event_accel.a 00:02:47.438 SO libspdk_event_accel.so.6.0 00:02:47.438 SYMLINK libspdk_event_accel.so 00:02:47.697 CC module/event/subsystems/bdev/bdev.o 00:02:47.955 LIB libspdk_event_bdev.a 00:02:47.955 SO libspdk_event_bdev.so.6.0 00:02:47.955 SYMLINK libspdk_event_bdev.so 00:02:48.214 CC module/event/subsystems/ublk/ublk.o 00:02:48.214 CC module/event/subsystems/scsi/scsi.o 00:02:48.214 CC module/event/subsystems/nbd/nbd.o 00:02:48.214 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:48.214 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:48.214 LIB libspdk_event_ublk.a 00:02:48.473 LIB libspdk_event_nbd.a 00:02:48.473 LIB libspdk_event_scsi.a 00:02:48.473 SO libspdk_event_ublk.so.3.0 00:02:48.473 SO libspdk_event_nbd.so.6.0 00:02:48.473 SO libspdk_event_scsi.so.6.0 00:02:48.473 SYMLINK libspdk_event_ublk.so 00:02:48.473 SYMLINK libspdk_event_nbd.so 00:02:48.473 SYMLINK libspdk_event_scsi.so 00:02:48.473 LIB libspdk_event_nvmf.a 00:02:48.473 SO libspdk_event_nvmf.so.6.0 00:02:48.732 SYMLINK libspdk_event_nvmf.so 00:02:48.732 CC module/event/subsystems/iscsi/iscsi.o 00:02:48.732 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:48.991 LIB libspdk_event_vhost_scsi.a 00:02:48.991 LIB libspdk_event_iscsi.a 00:02:48.991 SO libspdk_event_vhost_scsi.so.3.0 00:02:48.991 SO libspdk_event_iscsi.so.6.0 00:02:48.991 SYMLINK libspdk_event_vhost_scsi.so 00:02:48.991 SYMLINK libspdk_event_iscsi.so 00:02:49.250 SO libspdk.so.6.0 00:02:49.250 SYMLINK libspdk.so 00:02:49.507 CC app/trace_record/trace_record.o 00:02:49.507 CXX app/trace/trace.o 00:02:49.507 CC app/spdk_lspci/spdk_lspci.o 00:02:49.507 CC app/iscsi_tgt/iscsi_tgt.o 00:02:49.507 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:49.507 CC app/nvmf_tgt/nvmf_main.o 00:02:49.507 CC 
app/spdk_tgt/spdk_tgt.o 00:02:49.507 CC examples/util/zipf/zipf.o 00:02:49.507 CC examples/ioat/perf/perf.o 00:02:49.507 CC test/thread/poller_perf/poller_perf.o 00:02:49.507 LINK spdk_lspci 00:02:49.764 LINK nvmf_tgt 00:02:49.764 LINK interrupt_tgt 00:02:49.764 LINK zipf 00:02:49.764 LINK spdk_trace_record 00:02:49.764 LINK poller_perf 00:02:49.764 LINK iscsi_tgt 00:02:49.764 LINK ioat_perf 00:02:49.764 LINK spdk_tgt 00:02:49.764 CC app/spdk_nvme_perf/perf.o 00:02:49.764 LINK spdk_trace 00:02:50.020 CC app/spdk_nvme_discover/discovery_aer.o 00:02:50.021 CC app/spdk_nvme_identify/identify.o 00:02:50.021 CC app/spdk_top/spdk_top.o 00:02:50.021 CC examples/ioat/verify/verify.o 00:02:50.021 CC app/spdk_dd/spdk_dd.o 00:02:50.021 CC test/dma/test_dma/test_dma.o 00:02:50.278 TEST_HEADER include/spdk/accel.h 00:02:50.278 TEST_HEADER include/spdk/accel_module.h 00:02:50.278 TEST_HEADER include/spdk/assert.h 00:02:50.278 TEST_HEADER include/spdk/barrier.h 00:02:50.278 TEST_HEADER include/spdk/base64.h 00:02:50.278 TEST_HEADER include/spdk/bdev.h 00:02:50.278 TEST_HEADER include/spdk/bdev_module.h 00:02:50.278 CC test/app/bdev_svc/bdev_svc.o 00:02:50.278 TEST_HEADER include/spdk/bdev_zone.h 00:02:50.278 TEST_HEADER include/spdk/bit_array.h 00:02:50.278 TEST_HEADER include/spdk/bit_pool.h 00:02:50.278 TEST_HEADER include/spdk/blob_bdev.h 00:02:50.278 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:50.278 TEST_HEADER include/spdk/blobfs.h 00:02:50.278 TEST_HEADER include/spdk/blob.h 00:02:50.278 LINK spdk_nvme_discover 00:02:50.278 TEST_HEADER include/spdk/conf.h 00:02:50.278 TEST_HEADER include/spdk/config.h 00:02:50.278 CC app/fio/nvme/fio_plugin.o 00:02:50.278 TEST_HEADER include/spdk/cpuset.h 00:02:50.278 TEST_HEADER include/spdk/crc16.h 00:02:50.278 TEST_HEADER include/spdk/crc32.h 00:02:50.278 TEST_HEADER include/spdk/crc64.h 00:02:50.278 TEST_HEADER include/spdk/dif.h 00:02:50.278 TEST_HEADER include/spdk/dma.h 00:02:50.278 TEST_HEADER include/spdk/endian.h 00:02:50.278 TEST_HEADER include/spdk/env_dpdk.h 00:02:50.278 TEST_HEADER include/spdk/env.h 00:02:50.278 TEST_HEADER include/spdk/event.h 00:02:50.278 TEST_HEADER include/spdk/fd_group.h 00:02:50.278 TEST_HEADER include/spdk/fd.h 00:02:50.278 TEST_HEADER include/spdk/file.h 00:02:50.278 TEST_HEADER include/spdk/ftl.h 00:02:50.278 TEST_HEADER include/spdk/gpt_spec.h 00:02:50.278 TEST_HEADER include/spdk/hexlify.h 00:02:50.278 TEST_HEADER include/spdk/histogram_data.h 00:02:50.278 LINK verify 00:02:50.278 TEST_HEADER include/spdk/idxd.h 00:02:50.278 TEST_HEADER include/spdk/idxd_spec.h 00:02:50.278 TEST_HEADER include/spdk/init.h 00:02:50.278 TEST_HEADER include/spdk/ioat.h 00:02:50.278 TEST_HEADER include/spdk/ioat_spec.h 00:02:50.278 TEST_HEADER include/spdk/iscsi_spec.h 00:02:50.278 TEST_HEADER include/spdk/json.h 00:02:50.278 TEST_HEADER include/spdk/jsonrpc.h 00:02:50.278 TEST_HEADER include/spdk/keyring.h 00:02:50.278 TEST_HEADER include/spdk/keyring_module.h 00:02:50.278 TEST_HEADER include/spdk/likely.h 00:02:50.278 TEST_HEADER include/spdk/log.h 00:02:50.278 TEST_HEADER include/spdk/lvol.h 00:02:50.278 TEST_HEADER include/spdk/memory.h 00:02:50.278 TEST_HEADER include/spdk/mmio.h 00:02:50.278 TEST_HEADER include/spdk/nbd.h 00:02:50.278 TEST_HEADER include/spdk/notify.h 00:02:50.278 TEST_HEADER include/spdk/nvme.h 00:02:50.278 TEST_HEADER include/spdk/nvme_intel.h 00:02:50.278 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:50.278 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:50.278 TEST_HEADER include/spdk/nvme_spec.h 00:02:50.278 
TEST_HEADER include/spdk/nvme_zns.h 00:02:50.278 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:50.278 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:50.278 TEST_HEADER include/spdk/nvmf.h 00:02:50.278 TEST_HEADER include/spdk/nvmf_spec.h 00:02:50.278 TEST_HEADER include/spdk/nvmf_transport.h 00:02:50.278 TEST_HEADER include/spdk/opal.h 00:02:50.278 TEST_HEADER include/spdk/opal_spec.h 00:02:50.278 TEST_HEADER include/spdk/pci_ids.h 00:02:50.278 TEST_HEADER include/spdk/pipe.h 00:02:50.278 TEST_HEADER include/spdk/queue.h 00:02:50.537 TEST_HEADER include/spdk/reduce.h 00:02:50.537 TEST_HEADER include/spdk/rpc.h 00:02:50.537 TEST_HEADER include/spdk/scheduler.h 00:02:50.537 TEST_HEADER include/spdk/scsi.h 00:02:50.537 TEST_HEADER include/spdk/scsi_spec.h 00:02:50.537 TEST_HEADER include/spdk/sock.h 00:02:50.537 TEST_HEADER include/spdk/stdinc.h 00:02:50.537 LINK bdev_svc 00:02:50.537 TEST_HEADER include/spdk/string.h 00:02:50.537 TEST_HEADER include/spdk/thread.h 00:02:50.537 TEST_HEADER include/spdk/trace.h 00:02:50.537 TEST_HEADER include/spdk/trace_parser.h 00:02:50.537 TEST_HEADER include/spdk/tree.h 00:02:50.537 TEST_HEADER include/spdk/ublk.h 00:02:50.537 TEST_HEADER include/spdk/util.h 00:02:50.537 TEST_HEADER include/spdk/uuid.h 00:02:50.537 TEST_HEADER include/spdk/version.h 00:02:50.537 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:50.537 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:50.537 TEST_HEADER include/spdk/vhost.h 00:02:50.537 TEST_HEADER include/spdk/vmd.h 00:02:50.537 TEST_HEADER include/spdk/xor.h 00:02:50.537 TEST_HEADER include/spdk/zipf.h 00:02:50.537 CXX test/cpp_headers/accel.o 00:02:50.537 LINK test_dma 00:02:50.537 CC app/vhost/vhost.o 00:02:50.537 LINK spdk_dd 00:02:50.795 CXX test/cpp_headers/accel_module.o 00:02:50.795 CC examples/thread/thread/thread_ex.o 00:02:50.795 LINK spdk_nvme_perf 00:02:50.795 LINK vhost 00:02:50.795 LINK spdk_nvme 00:02:50.795 LINK spdk_nvme_identify 00:02:50.795 CC test/app/histogram_perf/histogram_perf.o 00:02:50.795 CXX test/cpp_headers/assert.o 00:02:50.795 LINK spdk_top 00:02:50.795 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.795 CC test/app/jsoncat/jsoncat.o 00:02:51.055 CXX test/cpp_headers/barrier.o 00:02:51.055 LINK thread 00:02:51.055 LINK histogram_perf 00:02:51.055 CXX test/cpp_headers/base64.o 00:02:51.055 CXX test/cpp_headers/bdev.o 00:02:51.055 LINK jsoncat 00:02:51.055 CC app/fio/bdev/fio_plugin.o 00:02:51.055 CXX test/cpp_headers/bdev_module.o 00:02:51.055 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.055 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:51.317 CXX test/cpp_headers/bdev_zone.o 00:02:51.317 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:51.317 LINK nvme_fuzz 00:02:51.317 CC test/env/vtophys/vtophys.o 00:02:51.317 CC examples/sock/hello_world/hello_sock.o 00:02:51.317 CXX test/cpp_headers/bit_array.o 00:02:51.317 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.317 CC examples/vmd/led/led.o 00:02:51.575 CC test/env/mem_callbacks/mem_callbacks.o 00:02:51.575 CXX test/cpp_headers/bit_pool.o 00:02:51.575 LINK vtophys 00:02:51.575 LINK lsvmd 00:02:51.575 LINK spdk_bdev 00:02:51.575 LINK led 00:02:51.575 LINK hello_sock 00:02:51.575 CXX test/cpp_headers/blob_bdev.o 00:02:51.575 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:51.575 LINK vhost_fuzz 00:02:51.575 CXX test/cpp_headers/blobfs_bdev.o 00:02:51.575 CXX test/cpp_headers/blobfs.o 00:02:51.833 CC test/env/memory/memory_ut.o 00:02:51.833 LINK env_dpdk_post_init 00:02:51.833 CC test/env/pci/pci_ut.o 00:02:51.833 CXX test/cpp_headers/blob.o 
00:02:52.092 CC test/app/stub/stub.o 00:02:52.092 CC examples/idxd/perf/perf.o 00:02:52.092 CC examples/accel/perf/accel_perf.o 00:02:52.092 CC test/event/event_perf/event_perf.o 00:02:52.092 LINK mem_callbacks 00:02:52.092 CXX test/cpp_headers/conf.o 00:02:52.092 CC test/event/reactor/reactor.o 00:02:52.092 LINK stub 00:02:52.092 CXX test/cpp_headers/config.o 00:02:52.092 LINK event_perf 00:02:52.350 CXX test/cpp_headers/cpuset.o 00:02:52.350 LINK pci_ut 00:02:52.350 LINK reactor 00:02:52.350 LINK idxd_perf 00:02:52.350 CXX test/cpp_headers/crc16.o 00:02:52.350 CC examples/blob/hello_world/hello_blob.o 00:02:52.608 CC examples/blob/cli/blobcli.o 00:02:52.608 LINK accel_perf 00:02:52.608 CC test/event/reactor_perf/reactor_perf.o 00:02:52.608 CC examples/nvme/hello_world/hello_world.o 00:02:52.608 CXX test/cpp_headers/crc32.o 00:02:52.608 CC test/event/app_repeat/app_repeat.o 00:02:52.608 CC test/event/scheduler/scheduler.o 00:02:52.608 CXX test/cpp_headers/crc64.o 00:02:52.608 LINK reactor_perf 00:02:52.608 LINK hello_blob 00:02:52.866 LINK app_repeat 00:02:52.866 LINK iscsi_fuzz 00:02:52.866 LINK hello_world 00:02:52.866 CXX test/cpp_headers/dif.o 00:02:52.866 CXX test/cpp_headers/dma.o 00:02:52.866 LINK scheduler 00:02:53.124 LINK blobcli 00:02:53.124 LINK memory_ut 00:02:53.124 CC examples/bdev/hello_world/hello_bdev.o 00:02:53.124 CXX test/cpp_headers/endian.o 00:02:53.124 CC examples/nvme/reconnect/reconnect.o 00:02:53.124 CXX test/cpp_headers/env_dpdk.o 00:02:53.124 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:53.124 CC examples/bdev/bdevperf/bdevperf.o 00:02:53.124 CC test/rpc_client/rpc_client_test.o 00:02:53.124 CXX test/cpp_headers/env.o 00:02:53.382 CC test/nvme/aer/aer.o 00:02:53.382 LINK hello_bdev 00:02:53.382 CC test/nvme/reset/reset.o 00:02:53.382 CC test/nvme/sgl/sgl.o 00:02:53.382 LINK reconnect 00:02:53.382 CXX test/cpp_headers/event.o 00:02:53.382 CC test/accel/dif/dif.o 00:02:53.382 LINK rpc_client_test 00:02:53.642 CXX test/cpp_headers/fd_group.o 00:02:53.642 LINK nvme_manage 00:02:53.642 LINK aer 00:02:53.642 LINK reset 00:02:53.642 LINK sgl 00:02:53.642 CC examples/nvme/hotplug/hotplug.o 00:02:53.642 CC examples/nvme/arbitration/arbitration.o 00:02:53.642 CXX test/cpp_headers/fd.o 00:02:53.642 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:53.900 CC examples/nvme/abort/abort.o 00:02:53.900 CC test/nvme/e2edp/nvme_dp.o 00:02:53.900 CXX test/cpp_headers/file.o 00:02:53.900 CC test/nvme/err_injection/err_injection.o 00:02:53.900 CC test/nvme/overhead/overhead.o 00:02:53.900 LINK bdevperf 00:02:53.900 LINK dif 00:02:53.900 LINK cmb_copy 00:02:53.900 LINK hotplug 00:02:54.157 LINK arbitration 00:02:54.157 LINK err_injection 00:02:54.157 CXX test/cpp_headers/ftl.o 00:02:54.157 LINK nvme_dp 00:02:54.157 LINK overhead 00:02:54.157 CXX test/cpp_headers/gpt_spec.o 00:02:54.157 CC test/nvme/startup/startup.o 00:02:54.157 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:54.157 LINK abort 00:02:54.415 CXX test/cpp_headers/hexlify.o 00:02:54.415 CC test/blobfs/mkfs/mkfs.o 00:02:54.415 LINK startup 00:02:54.415 CC test/nvme/reserve/reserve.o 00:02:54.415 CC test/nvme/simple_copy/simple_copy.o 00:02:54.415 LINK pmr_persistence 00:02:54.415 CC test/nvme/connect_stress/connect_stress.o 00:02:54.415 CC test/bdev/bdevio/bdevio.o 00:02:54.415 CXX test/cpp_headers/histogram_data.o 00:02:54.415 CC test/lvol/esnap/esnap.o 00:02:54.673 CC test/nvme/boot_partition/boot_partition.o 00:02:54.673 LINK mkfs 00:02:54.673 LINK reserve 00:02:54.673 LINK connect_stress 00:02:54.673 LINK 
simple_copy 00:02:54.673 CC test/nvme/compliance/nvme_compliance.o 00:02:54.673 CXX test/cpp_headers/idxd.o 00:02:54.673 LINK boot_partition 00:02:54.931 CXX test/cpp_headers/idxd_spec.o 00:02:54.931 CXX test/cpp_headers/init.o 00:02:54.931 CC examples/nvmf/nvmf/nvmf.o 00:02:54.931 CXX test/cpp_headers/ioat.o 00:02:54.931 LINK bdevio 00:02:54.931 CC test/nvme/fused_ordering/fused_ordering.o 00:02:54.931 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:54.931 CC test/nvme/fdp/fdp.o 00:02:54.931 CXX test/cpp_headers/ioat_spec.o 00:02:54.931 LINK nvme_compliance 00:02:55.189 CXX test/cpp_headers/iscsi_spec.o 00:02:55.189 CC test/nvme/cuse/cuse.o 00:02:55.189 CXX test/cpp_headers/json.o 00:02:55.189 LINK fused_ordering 00:02:55.189 LINK doorbell_aers 00:02:55.189 LINK nvmf 00:02:55.189 CXX test/cpp_headers/jsonrpc.o 00:02:55.189 CXX test/cpp_headers/keyring.o 00:02:55.189 CXX test/cpp_headers/keyring_module.o 00:02:55.447 CXX test/cpp_headers/likely.o 00:02:55.447 LINK fdp 00:02:55.447 CXX test/cpp_headers/log.o 00:02:55.447 CXX test/cpp_headers/lvol.o 00:02:55.447 CXX test/cpp_headers/memory.o 00:02:55.447 CXX test/cpp_headers/mmio.o 00:02:55.447 CXX test/cpp_headers/nbd.o 00:02:55.447 CXX test/cpp_headers/notify.o 00:02:55.447 CXX test/cpp_headers/nvme.o 00:02:55.447 CXX test/cpp_headers/nvme_intel.o 00:02:55.447 CXX test/cpp_headers/nvme_ocssd.o 00:02:55.447 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:55.447 CXX test/cpp_headers/nvme_spec.o 00:02:55.447 CXX test/cpp_headers/nvme_zns.o 00:02:55.705 CXX test/cpp_headers/nvmf_cmd.o 00:02:55.705 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:55.705 CXX test/cpp_headers/nvmf.o 00:02:55.705 CXX test/cpp_headers/nvmf_spec.o 00:02:55.705 CXX test/cpp_headers/nvmf_transport.o 00:02:55.705 CXX test/cpp_headers/opal.o 00:02:55.705 CXX test/cpp_headers/opal_spec.o 00:02:55.705 CXX test/cpp_headers/pci_ids.o 00:02:55.705 CXX test/cpp_headers/pipe.o 00:02:55.705 CXX test/cpp_headers/queue.o 00:02:55.963 CXX test/cpp_headers/reduce.o 00:02:55.963 CXX test/cpp_headers/rpc.o 00:02:55.963 CXX test/cpp_headers/scheduler.o 00:02:55.963 CXX test/cpp_headers/scsi.o 00:02:55.963 CXX test/cpp_headers/scsi_spec.o 00:02:55.963 CXX test/cpp_headers/sock.o 00:02:55.963 CXX test/cpp_headers/stdinc.o 00:02:55.963 CXX test/cpp_headers/string.o 00:02:55.963 CXX test/cpp_headers/thread.o 00:02:55.963 CXX test/cpp_headers/trace.o 00:02:55.963 CXX test/cpp_headers/trace_parser.o 00:02:56.221 CXX test/cpp_headers/tree.o 00:02:56.221 CXX test/cpp_headers/ublk.o 00:02:56.221 CXX test/cpp_headers/util.o 00:02:56.221 CXX test/cpp_headers/uuid.o 00:02:56.221 CXX test/cpp_headers/version.o 00:02:56.221 CXX test/cpp_headers/vfio_user_pci.o 00:02:56.221 CXX test/cpp_headers/vfio_user_spec.o 00:02:56.221 CXX test/cpp_headers/vhost.o 00:02:56.221 CXX test/cpp_headers/vmd.o 00:02:56.221 CXX test/cpp_headers/xor.o 00:02:56.221 CXX test/cpp_headers/zipf.o 00:02:56.480 LINK cuse 00:02:59.772 LINK esnap 00:03:00.030 00:03:00.030 real 1m5.076s 00:03:00.030 user 6m35.112s 00:03:00.030 sys 1m27.407s 00:03:00.030 16:06:43 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:00.030 ************************************ 00:03:00.030 END TEST make 00:03:00.030 ************************************ 00:03:00.030 16:06:43 make -- common/autotest_common.sh@10 -- $ set +x 00:03:00.030 16:06:43 -- common/autotest_common.sh@1142 -- $ return 0 00:03:00.030 16:06:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:00.030 16:06:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:00.030 
16:06:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:00.030 16:06:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.030 16:06:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:00.030 16:06:43 -- pm/common@44 -- $ pid=5183 00:03:00.030 16:06:43 -- pm/common@50 -- $ kill -TERM 5183 00:03:00.030 16:06:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.030 16:06:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:00.030 16:06:43 -- pm/common@44 -- $ pid=5185 00:03:00.030 16:06:43 -- pm/common@50 -- $ kill -TERM 5185 00:03:00.289 16:06:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:00.289 16:06:43 -- nvmf/common.sh@7 -- # uname -s 00:03:00.289 16:06:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:00.289 16:06:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:00.289 16:06:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:00.289 16:06:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:00.289 16:06:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:00.289 16:06:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:00.289 16:06:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:00.289 16:06:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:00.289 16:06:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:00.289 16:06:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:00.289 16:06:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:03:00.289 16:06:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:03:00.289 16:06:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:00.289 16:06:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:00.289 16:06:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:00.289 16:06:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:00.289 16:06:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:00.289 16:06:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:00.289 16:06:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:00.289 16:06:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:00.289 16:06:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.289 16:06:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.289 16:06:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.289 16:06:43 -- paths/export.sh@5 -- # export PATH 00:03:00.289 16:06:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.289 16:06:43 -- nvmf/common.sh@47 -- # : 0 00:03:00.289 16:06:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:00.289 16:06:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:00.289 16:06:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:00.289 16:06:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:00.289 16:06:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:00.289 16:06:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:00.289 16:06:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:00.289 16:06:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:00.289 16:06:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:00.289 16:06:43 -- spdk/autotest.sh@32 -- # uname -s 00:03:00.289 16:06:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:00.289 16:06:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:00.289 16:06:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:00.289 16:06:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:00.289 16:06:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:00.289 16:06:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:00.289 16:06:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:00.289 16:06:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:00.289 16:06:43 -- spdk/autotest.sh@48 -- # udevadm_pid=52829 00:03:00.289 16:06:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:00.289 16:06:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:00.289 16:06:43 -- pm/common@17 -- # local monitor 00:03:00.289 16:06:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.289 16:06:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.289 16:06:43 -- pm/common@25 -- # sleep 1 00:03:00.289 16:06:43 -- pm/common@21 -- # date +%s 00:03:00.289 16:06:43 -- pm/common@21 -- # date +%s 00:03:00.290 16:06:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720800403 00:03:00.290 16:06:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720800403 00:03:00.290 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720800403_collect-cpu-load.pm.log 00:03:00.290 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720800403_collect-vmstat.pm.log 00:03:01.225 16:06:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:01.225 16:06:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:01.225 16:06:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:01.225 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:03:01.225 16:06:44 -- spdk/autotest.sh@59 -- # create_test_list 00:03:01.225 16:06:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:01.225 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:03:01.225 16:06:44 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:01.225 16:06:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:01.225 16:06:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:01.225 16:06:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:01.225 16:06:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:01.225 16:06:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:01.225 16:06:44 -- common/autotest_common.sh@1455 -- # uname 00:03:01.225 16:06:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:01.225 16:06:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:01.225 16:06:44 -- common/autotest_common.sh@1475 -- # uname 00:03:01.225 16:06:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:01.225 16:06:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:01.225 16:06:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:01.225 16:06:44 -- spdk/autotest.sh@72 -- # hash lcov 00:03:01.225 16:06:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:01.225 16:06:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:01.225 --rc lcov_branch_coverage=1 00:03:01.225 --rc lcov_function_coverage=1 00:03:01.225 --rc genhtml_branch_coverage=1 00:03:01.225 --rc genhtml_function_coverage=1 00:03:01.225 --rc genhtml_legend=1 00:03:01.225 --rc geninfo_all_blocks=1 00:03:01.225 ' 00:03:01.225 16:06:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:01.225 --rc lcov_branch_coverage=1 00:03:01.225 --rc lcov_function_coverage=1 00:03:01.225 --rc genhtml_branch_coverage=1 00:03:01.225 --rc genhtml_function_coverage=1 00:03:01.225 --rc genhtml_legend=1 00:03:01.225 --rc geninfo_all_blocks=1 00:03:01.225 ' 00:03:01.225 16:06:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:01.225 --rc lcov_branch_coverage=1 00:03:01.225 --rc lcov_function_coverage=1 00:03:01.225 --rc genhtml_branch_coverage=1 00:03:01.225 --rc genhtml_function_coverage=1 00:03:01.225 --rc genhtml_legend=1 00:03:01.225 --rc geninfo_all_blocks=1 00:03:01.225 --no-external' 00:03:01.225 16:06:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:01.225 --rc lcov_branch_coverage=1 00:03:01.225 --rc lcov_function_coverage=1 00:03:01.225 --rc genhtml_branch_coverage=1 00:03:01.225 --rc genhtml_function_coverage=1 00:03:01.225 --rc genhtml_legend=1 00:03:01.225 --rc geninfo_all_blocks=1 00:03:01.225 --no-external' 00:03:01.225 16:06:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:01.483 lcov: LCOV version 1.14 00:03:01.483 16:06:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:16.360 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:16.360 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:28.562 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:28.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:28.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:28.563 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:28.563 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:28.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:28.564 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:28.564 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:31.850 16:07:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:31.850 16:07:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:31.850 16:07:15 -- common/autotest_common.sh@10 -- # set +x 00:03:31.850 16:07:15 -- spdk/autotest.sh@91 -- # rm -f 00:03:31.850 16:07:15 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:32.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:32.417 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:32.417 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:32.417 16:07:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:32.417 16:07:16 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:32.417 16:07:16 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:32.417 16:07:16 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:32.417 16:07:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:32.417 16:07:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:32.417 16:07:16 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:32.417 16:07:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:32.417 16:07:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:32.417 16:07:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:32.417 16:07:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:32.417 16:07:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:32.417 16:07:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:32.417 16:07:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:32.417 16:07:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:32.417 16:07:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:32.417 16:07:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:32.417 16:07:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:32.417 16:07:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:32.417 16:07:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:32.417 16:07:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:32.417 16:07:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:32.418 16:07:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:32.418 16:07:16 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:32.418 16:07:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:32.418 16:07:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:32.418 16:07:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:32.418 16:07:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:32.418 16:07:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:32.418 16:07:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:32.676 No valid GPT data, bailing 00:03:32.676 16:07:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:32.676 16:07:16 -- scripts/common.sh@391 -- # pt= 00:03:32.676 16:07:16 -- scripts/common.sh@392 -- # return 1 00:03:32.676 16:07:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:32.676 1+0 records in 00:03:32.676 1+0 records out 00:03:32.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424051 s, 247 MB/s 00:03:32.676 16:07:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:32.676 16:07:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:32.677 16:07:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:32.677 16:07:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:32.677 16:07:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:32.677 No valid GPT data, bailing 00:03:32.677 16:07:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:32.677 16:07:16 -- scripts/common.sh@391 -- # pt= 00:03:32.677 16:07:16 -- scripts/common.sh@392 -- # return 1 00:03:32.677 16:07:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:32.677 1+0 records in 00:03:32.677 1+0 records out 00:03:32.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412094 s, 254 MB/s 00:03:32.677 16:07:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:32.677 16:07:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:32.677 16:07:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:32.677 16:07:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:32.677 16:07:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:32.677 No valid GPT data, bailing 00:03:32.677 16:07:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:32.677 16:07:16 -- scripts/common.sh@391 -- # pt= 00:03:32.677 16:07:16 -- scripts/common.sh@392 -- # return 1 00:03:32.677 16:07:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:32.677 1+0 records in 00:03:32.677 1+0 records out 00:03:32.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00289556 s, 362 MB/s 00:03:32.677 16:07:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:32.677 16:07:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:32.677 16:07:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:32.677 16:07:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:32.677 16:07:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:32.677 No valid GPT data, bailing 00:03:32.677 16:07:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:32.935 16:07:16 -- scripts/common.sh@391 -- # pt= 00:03:32.935 16:07:16 -- scripts/common.sh@392 -- # return 1 00:03:32.935 16:07:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:03:32.935 1+0 records in 00:03:32.935 1+0 records out 00:03:32.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00307395 s, 341 MB/s 00:03:32.935 16:07:16 -- spdk/autotest.sh@118 -- # sync 00:03:32.935 16:07:16 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:32.935 16:07:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:32.935 16:07:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:34.838 16:07:18 -- spdk/autotest.sh@124 -- # uname -s 00:03:34.838 16:07:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:34.838 16:07:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:34.838 16:07:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.838 16:07:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.838 16:07:18 -- common/autotest_common.sh@10 -- # set +x 00:03:34.838 ************************************ 00:03:34.838 START TEST setup.sh 00:03:34.838 ************************************ 00:03:34.838 16:07:18 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:34.838 * Looking for test storage... 00:03:34.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:34.838 16:07:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:34.838 16:07:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:34.838 16:07:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:34.838 16:07:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.838 16:07:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.838 16:07:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.838 ************************************ 00:03:34.838 START TEST acl 00:03:34.839 ************************************ 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:34.839 * Looking for test storage... 
00:03:34.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:34.839 16:07:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:34.839 16:07:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.839 16:07:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:34.839 16:07:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:34.839 16:07:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:34.839 16:07:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:34.839 16:07:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:34.839 16:07:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.839 16:07:18 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:35.406 16:07:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:35.406 16:07:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:35.406 16:07:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.406 16:07:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:35.406 16:07:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.406 16:07:19 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:36.343 16:07:19 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.343 Hugepages 00:03:36.343 node hugesize free / total 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.343 00:03:36.343 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:36.343 16:07:19 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:36.343 16:07:19 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.343 16:07:19 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.343 16:07:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:36.343 ************************************ 00:03:36.343 START TEST denied 00:03:36.343 ************************************ 00:03:36.343 16:07:19 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:36.343 16:07:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:36.343 16:07:19 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:36.343 16:07:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:36.343 16:07:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.343 16:07:19 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:37.279 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.279 16:07:20 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.848 00:03:37.848 real 0m1.394s 00:03:37.848 user 0m0.562s 00:03:37.848 sys 0m0.790s 00:03:37.848 16:07:21 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.848 16:07:21 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:37.848 ************************************ 00:03:37.848 END TEST denied 00:03:37.848 ************************************ 00:03:37.848 16:07:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:37.848 16:07:21 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:37.848 16:07:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.848 16:07:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.848 16:07:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:37.848 ************************************ 00:03:37.848 START TEST allowed 00:03:37.848 ************************************ 00:03:37.848 16:07:21 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:37.848 16:07:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:37.848 16:07:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:37.848 16:07:21 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:37.849 16:07:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.849 16:07:21 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:38.785 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:38.785 16:07:22 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:38.785 16:07:22 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:38.785 16:07:22 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:38.785 16:07:22 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:38.785 16:07:22 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:38.785 16:07:22 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:38.785 16:07:22 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:38.785 16:07:22 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:38.786 16:07:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.786 16:07:22 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.352 00:03:39.352 real 0m1.485s 00:03:39.352 user 0m0.644s 00:03:39.352 sys 0m0.808s 00:03:39.352 16:07:22 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:39.352 ************************************ 00:03:39.352 16:07:22 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:39.352 END TEST allowed 00:03:39.352 ************************************ 00:03:39.352 16:07:22 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:39.352 ************************************ 00:03:39.352 END TEST acl 00:03:39.352 ************************************ 00:03:39.352 00:03:39.352 real 0m4.623s 00:03:39.352 user 0m2.024s 00:03:39.352 sys 0m2.529s 00:03:39.352 16:07:22 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.352 16:07:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.352 16:07:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:39.352 16:07:22 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:39.352 16:07:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.352 16:07:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.352 16:07:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.352 ************************************ 00:03:39.352 START TEST hugepages 00:03:39.352 ************************************ 00:03:39.352 16:07:23 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:39.352 * Looking for test storage... 00:03:39.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.611 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6019984 kB' 'MemAvailable: 7399760 kB' 'Buffers: 2436 kB' 'Cached: 1594120 kB' 'SwapCached: 0 kB' 'Active: 435980 kB' 'Inactive: 1265208 kB' 'Active(anon): 115120 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106356 kB' 'Mapped: 48648 kB' 'Shmem: 10488 kB' 'KReclaimable: 61292 kB' 'Slab: 132444 kB' 'SReclaimable: 61292 kB' 'SUnreclaim: 71152 kB' 'KernelStack: 6308 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.612 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:39.613 16:07:23 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:39.613 16:07:23 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:39.614 16:07:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.614 16:07:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.614 16:07:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.614 ************************************ 00:03:39.614 START TEST default_setup 00:03:39.614 ************************************ 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.614 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:40.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.181 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.181 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:03:40.445 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:40.445 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:40.445 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8081104 kB' 'MemAvailable: 9460788 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452984 kB' 'Inactive: 1265208 kB' 'Active(anon): 132124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123248 kB' 'Mapped: 48688 kB' 'Shmem: 10468 kB' 'KReclaimable: 61112 kB' 'Slab: 132276 kB' 'SReclaimable: 61112 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6256 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.446 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
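Note: the long runs of IFS=': ' / read -r var val _ / [[ Field == ... ]] / continue entries in this part of the trace all come from one helper, get_meminfo in setup/common.sh, which scans /proc/meminfo one field per iteration until it reaches the requested key (Hugepagesize for the earlier "echo 2048" result, AnonHugePages in the verify step here). The sketch below is paraphrased from the common.sh@16-@33 trace lines and is not the verbatim SPDK helper; details such as the exact loop form are assumptions.

    # Minimal sketch of setup/common.sh:get_meminfo, reconstructed from the trace.
    shopt -s extglob
    get_meminfo() {
        local get=$1              # field to look up, e.g. Hugepagesize or AnonHugePages
        local node=${2:-}         # optional NUMA node; empty means global /proc/meminfo
        local var val
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                 # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")          # common.sh@29: strip "Node N " prefixes
        while IFS=': ' read -r var val _; do      # each [[ ... ]]/continue pair in the log
            [[ $var == "$get" ]] || continue      #   is one pass through this comparison
            echo "$val"                           # common.sh@33: e.g. "echo 2048"
            return 0
        done < <(printf '%s\n' "${mem[@]}")       # common.sh@16 prints the captured lines
        return 1
    }

Because xtrace logs every comparison, a single get_meminfo call accounts for several dozen of the lines in this section.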
00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.447 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8080604 kB' 'MemAvailable: 9460252 kB' 'Buffers: 2436 kB' 'Cached: 1594104 kB' 'SwapCached: 0 kB' 'Active: 452516 kB' 'Inactive: 1265212 kB' 'Active(anon): 131656 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122856 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132112 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71084 kB' 'KernelStack: 6272 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.448 16:07:23 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.448 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
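Note: the get_meminfo calls driving these scans are issued from verify_nr_hugepages (setup/hugepages.sh@138), which samples AnonHugePages, HugePages_Surp and HugePages_Rsvd before checking that default_setup really ended up with its 1024 pages of 2048 kB (2 GiB). The sketch below is an assumed reconstruction based on the hugepages.sh@89-@100 trace lines and the get_meminfo sketch above; it is not the verbatim SPDK script, and the final comparison is only a placeholder.

    # Assumed shape of setup/hugepages.sh:verify_nr_hugepages, inferred from the trace.
    verify_nr_hugepages() {
        local node sorted_t sorted_s surp resv anon

        # hugepages.sh@96: AnonHugePages is only sampled when transparent huge pages
        # are not forced to [never], since THP would otherwise skew the accounting.
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
            anon=$(get_meminfo AnonHugePages)   # hugepages.sh@97
        fi
        anon=${anon:-0}

        surp=$(get_meminfo HugePages_Surp)      # hugepages.sh@99
        resv=$(get_meminfo HugePages_Rsvd)      # hugepages.sh@100

        # Placeholder for the real check: global and per-node HugePages_Total/Free
        # are compared against the expected allocation (nodes_test[0]=1024 here).
        echo "anon=$anon surp=$surp resv=$resv expected=${nodes_test[0]:-1024}"
    }

In this run the /proc/meminfo snapshots printed at common.sh@16 already show HugePages_Total: 1024 and HugePages_Free: 1024 with Hugepagesize: 2048 kB, so the verification is expected to pass.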
00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.449 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8080604 kB' 'MemAvailable: 9460252 kB' 'Buffers: 2436 kB' 'Cached: 1594104 kB' 'SwapCached: 0 kB' 'Active: 452776 kB' 'Inactive: 1265212 kB' 'Active(anon): 131916 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123116 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132088 kB' 'SReclaimable: 61028 kB' 
'SUnreclaim: 71060 kB' 'KernelStack: 6272 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.450 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.451 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:40.452 nr_hugepages=1024 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.452 resv_hugepages=0 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.452 surplus_hugepages=0 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.452 anon_hugepages=0 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8080352 kB' 'MemAvailable: 9460000 kB' 'Buffers: 2436 kB' 'Cached: 1594104 kB' 'SwapCached: 0 kB' 'Active: 452584 kB' 'Inactive: 1265212 kB' 'Active(anon): 131724 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132096 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71068 kB' 'KernelStack: 6240 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.452 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.452 16:07:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
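The long run of '[[ <field> == HugePages_Total ]]' / 'continue' entries above and below comes from setup/common.sh walking the /proc/meminfo snapshot it just printed, field by field, until it reaches the key it was asked for. A minimal stand-alone sketch of that pattern follows; the helper name matches the one in the trace, but the body is simplified for illustration and is not the exact SPDK code:

get_meminfo() {                            # usage: get_meminfo HugePages_Total
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo field
        echo "$val"                        # e.g. 1024 for HugePages_Total below
        return 0
    done < /proc/meminfo
    return 1
}

The same scan runs four times in this test (HugePages_Surp, HugePages_Rsvd, HugePages_Total, then HugePages_Surp again for node 0), which is why the full list of meminfo field names repeats in the trace.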
00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.453 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
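The scan above is about to reach HugePages_Total and echo 1024, and hugepages.sh then checks that the kernel's counters add up to what default_setup requested (the surplus and reserved counts were read the same way a few steps earlier and are both 0 here). A hedged sketch of that bookkeeping, using the values visible in this trace and the simplified get_meminfo from the previous sketch; the warning messages are illustrative, not SPDK output:

nr_hugepages=1024                           # requested by default_setup
surp=$(get_meminfo HugePages_Surp)          # 0 in the trace above
resv=$(get_meminfo HugePages_Rsvd)          # 0 in the trace above
total=$(get_meminfo HugePages_Total)        # 1024, printed just below
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
(( total == nr_hugepages ))                || echo 'unexpected surplus/reserved pages' >&2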
00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8080352 kB' 'MemUsed: 4161628 kB' 'SwapCached: 0 kB' 'Active: 452488 kB' 'Inactive: 1265212 kB' 'Active(anon): 131628 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1596540 kB' 'Mapped: 48588 kB' 'AnonPages: 122740 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61028 kB' 'Slab: 132096 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.454 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 
16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
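For readers following the trace: the block above is the default_setup test's last get_meminfo pass, which walks every /proc/meminfo field under xtrace, hitting "continue" on each key until it reaches HugePages_Surp and echoes 0. A minimal sketch of that parsing pattern is below; the function name is illustrative, not the actual SPDK setup/common.sh helper, which additionally supports per-node meminfo files.

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo pattern seen in the trace above:
    # scan /proc/meminfo with IFS=': ', skip every field that is not the
    # requested one, and print the matching value. The real helper can also
    # read /sys/devices/system/node/node<N>/meminfo and strips the
    # "Node N " prefix first (visible in the mapfile/mem= lines of the trace).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of 'continue' lines above
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints the surplus-page count, 0 in this run
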
00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.455 node0=1024 expecting 1024 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:40.455 00:03:40.455 real 0m0.959s 00:03:40.455 user 0m0.450s 00:03:40.455 sys 0m0.461s 00:03:40.455 16:07:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.456 16:07:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:40.456 ************************************ 00:03:40.456 END TEST default_setup 00:03:40.456 ************************************ 00:03:40.456 16:07:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:40.456 16:07:24 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:40.456 16:07:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.456 16:07:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.456 16:07:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.456 ************************************ 00:03:40.456 START TEST per_node_1G_alloc 00:03:40.456 ************************************ 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.456 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.456 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:41.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:41.030 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:41.030 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9128260 kB' 'MemAvailable: 10507916 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452932 kB' 'Inactive: 1265220 kB' 'Active(anon): 132072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123180 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132060 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71032 kB' 'KernelStack: 6228 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.030 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.031 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.032 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9128260 kB' 'MemAvailable: 10507916 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452852 kB' 'Inactive: 1265220 kB' 'Active(anon): 131992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132068 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71040 kB' 'KernelStack: 6244 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.032 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.033 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9128512 kB' 'MemAvailable: 10508168 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452428 kB' 'Inactive: 1265220 kB' 'Active(anon): 131568 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122640 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132056 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71028 kB' 'KernelStack: 6256 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.034 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.035 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 
16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:41.036 nr_hugepages=512 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:41.036 resv_hugepages=0 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.036 surplus_hugepages=0 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.036 anon_hugepages=0 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9128512 kB' 'MemAvailable: 10508168 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452404 kB' 'Inactive: 1265220 kB' 'Active(anon): 131544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122616 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132052 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6240 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.036 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 
16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.037 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9128512 kB' 'MemUsed: 3113468 kB' 'SwapCached: 0 kB' 'Active: 452404 kB' 'Inactive: 1265220 kB' 'Active(anon): 131544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1596544 kB' 'Mapped: 48596 kB' 'AnonPages: 122876 kB' 'Shmem: 10464 kB' 'KernelStack: 6308 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61028 kB' 'Slab: 132052 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71024 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.038 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.039 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.040 node0=512 expecting 512 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:41.040 00:03:41.040 real 0m0.502s 00:03:41.040 user 0m0.248s 00:03:41.040 sys 0m0.282s 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.040 16:07:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:41.040 ************************************ 00:03:41.040 END TEST per_node_1G_alloc 00:03:41.040 ************************************ 00:03:41.040 16:07:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:41.040 16:07:24 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:41.040 16:07:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.040 16:07:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.040 16:07:24 
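[editorial note] The meminfo scans traced above come from the get_meminfo helper in setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file), splits each line on ': ', and echoes only the value of the field that was asked for (AnonPages, HugePages_Surp, ...). A minimal sketch reconstructed from the xtrace lines in this log — the body is filled in by assumption, not copied from the repository:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

    # Hedged reconstruction of the traced helper; name and details are illustrative.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # per-node queries read the node-specific meminfo file when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # skip every field except the requested one
            echo "$val"                           # e.g. HugePages_Surp -> 0
            return 0
        done
        return 1
    }

With the dump shown above, get_meminfo_sketch HugePages_Surp would print 0, which is why each scan in the trace ends with "echo 0" followed by "return 0".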
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:41.040 ************************************ 00:03:41.040 START TEST even_2G_alloc 00:03:41.040 ************************************ 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.040 16:07:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:41.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:41.562 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:41.562 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc 
-- setup/hugepages.sh@92 -- # local surp 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.562 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8087340 kB' 'MemAvailable: 9466996 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452952 kB' 'Inactive: 1265220 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132040 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71012 kB' 'KernelStack: 6276 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.563 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8087340 kB' 'MemAvailable: 9466996 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452536 kB' 'Inactive: 
1265220 kB' 'Active(anon): 131676 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123048 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132036 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71008 kB' 'KernelStack: 6228 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.564 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.565 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8087340 kB' 'MemAvailable: 9466996 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452536 kB' 'Inactive: 1265220 kB' 'Active(anon): 131676 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123048 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132036 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71008 kB' 'KernelStack: 6228 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.566 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.567 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:41.568 nr_hugepages=1024 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.568 resv_hugepages=0 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.568 surplus_hugepages=0 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.568 anon_hugepages=0 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8087340 kB' 'MemAvailable: 9466996 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452336 kB' 'Inactive: 1265220 kB' 'Active(anon): 131476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122852 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132040 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71012 kB' 'KernelStack: 6224 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 364656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:41.568 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.568 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.569 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
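
The long runs of near-identical lines above and below are a single parsing loop unrolled by bash xtrace: the get_meminfo helper in setup/common.sh reads /proc/meminfo (or a per-NUMA-node meminfo file) one key at a time and hits continue on every key until it reaches the one requested (first HugePages_Rsvd, then HugePages_Total, then the per-node HugePages_Surp). The following is a minimal sketch of that helper pieced together from the traced commands at setup/common.sh@17 through @33; the if/while scaffolding, the redirections and the final return 1 are assumptions, not the verbatim SPDK source.

shopt -s extglob                      # needed for the +([0-9]) patterns seen in the trace

get_meminfo() {
    local get=$1 node=$2              # e.g. get_meminfo HugePages_Total, or get_meminfo HugePages_Surp 0
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # A node-scoped query switches to the per-node file when it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix used by per-node meminfo

    # Each pass of this loop is what emits one "[[ <key> == \H\u\g\e... ]]"
    # comparison plus "continue" pair in the log.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

Called as get_meminfo HugePages_Rsvd it prints 0 for this host, and get_meminfo HugePages_Surp 0 (used at setup/hugepages.sh@117 a little further down) reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.
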
00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.570 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8087340 kB' 'MemUsed: 4154640 kB' 'SwapCached: 0 kB' 'Active: 452140 kB' 'Inactive: 1265220 kB' 'Active(anon): 131280 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1596544 kB' 'Mapped: 48632 kB' 'AnonPages: 122388 kB' 'Shmem: 10464 kB' 'KernelStack: 6244 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61028 kB' 'Slab: 132028 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.570 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.570 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.571 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:41.572 node0=1024 expecting 1024 16:07:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:41.572
00:03:41.572 real 0m0.508s
00:03:41.572 user 0m0.269s
00:03:41.572 sys 0m0.271s
00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:41.572 16:07:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:41.572 ************************************
00:03:41.572 END TEST even_2G_alloc
00:03:41.572 ************************************
00:03:41.572 16:07:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:41.572 16:07:25 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:41.572 16:07:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:41.572 16:07:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:41.572 16:07:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:41.572 ************************************
00:03:41.572 START TEST odd_alloc
00:03:41.572 ************************************
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
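
At this point the even-allocation check has passed: node0 reports 1024 hugepages against an expected 1024, the [[ 1024 == 1024 ]] comparison succeeds, and the sub-test finished in about half a second of wall time. The odd_alloc test that starts next deliberately sizes an odd page count. Below is a short sketch of the sizing step just traced; the function and variable names come from the hugepages.sh@49 to @84 and @160 lines above, while the exact rounding expression is an assumption inferred from the observed numbers rather than the verbatim SPDK source.

default_hugepages=2048        # kB, matching "Hugepagesize: 2048 kB" in the meminfo dumps above

get_test_nr_hugepages() {
    local size=$1             # 2098176 kB here, i.e. a HUGEMEM of 2049 MiB times 1024
    if ((size >= default_hugepages)); then
        # 2098176 / 2048 = 1024.5 and the trace settles on nr_hugepages=1025,
        # so the request is rounded up to a whole (and here odd) page count.
        nr_hugepages=$(((size + default_hugepages - 1) / default_hugepages))
    fi
    get_test_nr_hugepages_per_node
}

get_test_nr_hugepages_per_node() {
    local user_nodes=()       # kept to mirror the traced locals; no nodes were requested
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=1
    local -g nodes_test=()
    # No explicit node list was passed, so the whole allocation lands on the
    # single node of this VM.
    ((_no_nodes > 0)) && nodes_test[_no_nodes - 1]=$_nr_hugepages
}

get_test_nr_hugepages 2098176
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"    # prints: nr_hugepages=1025 node0=1025

With those values in place the test re-runs scripts/setup.sh with HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes (the "setup output" call traced next), and verify_nr_hugepages then repeats the same meminfo accounting shown above for the new 1025-page reservation.
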
00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.572 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:42.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.146 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:42.146 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.146 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8082668 kB' 'MemAvailable: 9462324 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452836 kB' 'Inactive: 1265220 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123344 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132000 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 70972 kB' 'KernelStack: 6228 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 
16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.147 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 
16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
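Every repeated [[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pair in this trace is one iteration of common.sh's get_meminfo helper scanning a captured meminfo snapshot for a single key. A minimal reconstruction of that loop, based only on the commands echoed above (mem_f, mapfile, the "Node N " prefix strip, IFS=': ' read, and the echo/return on a match), is sketched below; the argument handling, the if/return structure and the usage lines are simplifications, not the verbatim SPDK source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip per-node prefixes

    get_meminfo() {
        local get=$1 node=${2:-}            # key to look up, optional NUMA node
        local var val _ line
        local mem_f=/proc/meminfo mem

        # Per-node lookups read that node's own meminfo file (path seen in the trace).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix every line with "Node N "

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the skipped keys are the bulk of the trace above
            echo "$val" && return 0            # value only; the "kB" unit falls into the _ field
        done
        return 1
    }

    get_meminfo HugePages_Surp    # -> 0 in this run
    get_meminfo HugePages_Total   # -> 1025 after the odd_alloc request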
00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8082820 kB' 'MemAvailable: 9462476 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452712 kB' 'Inactive: 1265220 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122720 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 131988 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 70960 kB' 'KernelStack: 6288 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.148 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.149 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 
16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8090844 kB' 'MemAvailable: 9470500 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452476 kB' 'Inactive: 1265220 kB' 'Active(anon): 131616 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122740 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 131988 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 70960 kB' 'KernelStack: 6272 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.150 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.151 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:42.152 nr_hugepages=1025 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:42.152 resv_hugepages=0 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.152 surplus_hugepages=0 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.152 anon_hugepages=0 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.152 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8090100 kB' 'MemAvailable: 9469756 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452808 kB' 'Inactive: 1265220 kB' 'Active(anon): 131948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123124 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 131988 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 70960 kB' 'KernelStack: 6320 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.153 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8092668 kB' 'MemUsed: 4149312 kB' 'SwapCached: 0 kB' 'Active: 452408 kB' 'Inactive: 1265220 kB' 'Active(anon): 131548 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1596544 kB' 'Mapped: 48592 kB' 'AnonPages: 123020 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61028 kB' 'Slab: 131976 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 70948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.154 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.155 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
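
The long per-key trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) one line at a time until it reaches the requested key, echoing its value and returning. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK tree (option handling and error paths are simplified):

    #!/usr/bin/env bash
    shopt -s extglob   # the traced script relies on extglob for the "Node N " strip below

    # Minimal sketch of the lookup the trace shows: pull one key out of
    # /proc/meminfo or a per-node meminfo file. Simplified, not the verbatim
    # SPDK setup/common.sh source.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip every other key, as in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp 0       # e.g. the per-node lookup traced above
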
00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.156 node0=1025 expecting 1025 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:42.156 00:03:42.156 real 0m0.520s 00:03:42.156 user 0m0.259s 00:03:42.156 sys 0m0.293s 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.156 16:07:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:42.156 ************************************ 00:03:42.156 END TEST odd_alloc 00:03:42.156 ************************************ 00:03:42.156 16:07:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:42.156 16:07:25 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:42.156 16:07:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.156 16:07:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.156 16:07:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.156 ************************************ 00:03:42.156 START TEST custom_alloc 00:03:42.156 ************************************ 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.156 16:07:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:42.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.731 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:42.731 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.731 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9141884 kB' 'MemAvailable: 10521540 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 453556 kB' 'Inactive: 1265220 kB' 'Active(anon): 132696 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123844 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132028 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71000 kB' 'KernelStack: 6276 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 365492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
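
In the custom_alloc setup traced above, get_test_nr_hugepages receives 1048576 kB and arrives at nr_hugepages=512; with the 2048 kB Hugepagesize shown in the meminfo dump that is simply size divided by page size, and the resulting pool appears as 'HugePages_Total: 512' / 'Hugetlb: 1048576 kB'. A hypothetical helper (not an SPDK function) doing the same conversion:

    # Hypothetical helper, not from the SPDK tree: convert a requested amount
    # of hugepage-backed memory (in kB) into a page count, matching the
    # 1048576 kB -> 512 pages step in the custom_alloc trace.
    pages_for_kb() {
        local size_kb=$1 hp_kb
        hp_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
        echo $(( size_kb / hp_kb ))
    }

    pages_for_kb 1048576    # prints 512 when Hugepagesize is 2048 kB
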
00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.732 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.732 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9141384 kB' 'MemAvailable: 10521040 kB' 'Buffers: 2436 kB' 'Cached: 
1594108 kB' 'SwapCached: 0 kB' 'Active: 452568 kB' 'Inactive: 1265220 kB' 'Active(anon): 131708 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123108 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132056 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71028 kB' 'KernelStack: 6256 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.733 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.734 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
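Earlier in each lookup the trace shows local node= followed by a test against /sys/devices/system/node/node/meminfo; that is the helper's per-NUMA-node branch being skipped because no node index was passed. When per-node counters are needed, the same information is exposed directly in sysfs; a short illustrative snippet using standard Linux sysfs paths, not something this particular run touches:

    # Per-node hugepage counters for node 0 and 2048 kB pages:
    node=0
    cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/free_hugepages"
    # System-wide counterparts of the HugePages_Total / HugePages_Free meminfo fields:
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages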
00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9141384 kB' 'MemAvailable: 10521040 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452640 kB' 'Inactive: 1265220 kB' 'Active(anon): 131780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122932 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132056 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71028 kB' 'KernelStack: 6288 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.735 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.736 16:07:26 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
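All three meminfo snapshots printed above report the same hugepage state (HugePages_Total: 512, HugePages_Free: 512, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB), and the sized total is consistent with the page count, i.e. Hugetlb = HugePages_Total * Hugepagesize. A quick check with the snapshot values:

    # 512 pages of 2048 kB each:
    echo $(( 512 * 2048 )) kB   # 1048576 kB, matching the Hugetlb line above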
00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:42.737 nr_hugepages=512 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:42.737 resv_hugepages=0 
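The resv=0 assignment and the nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 lines echoed here feed the consistency checks that follow at setup/hugepages.sh@107 and @109. Restated as a condensed standalone check (the error messages are illustrative, not the script's own):

    # The 512 pages requested by the custom_alloc test must be fully accounted
    # for by nr_hugepages plus any surplus and reserved pages (all zero here).
    want=512
    nr_hugepages=512 surp=0 resv=0
    (( want == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2
    (( want == nr_hugepages )) || echo "nr_hugepages drifted from the requested count" >&2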
00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.737 surplus_hugepages=0 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.737 anon_hugepages=0 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9141384 kB' 'MemAvailable: 10521040 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452440 kB' 'Inactive: 1265220 kB' 'Active(anon): 131580 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132056 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71028 kB' 'KernelStack: 6304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.737 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.738 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 
16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9141384 kB' 'MemUsed: 3100596 kB' 'SwapCached: 0 kB' 'Active: 452388 kB' 'Inactive: 1265220 kB' 'Active(anon): 131528 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1596544 kB' 'Mapped: 48592 kB' 'AnonPages: 122944 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61028 kB' 'Slab: 132044 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.739 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.740 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.741 node0=512 expecting 512 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:42.741 00:03:42.741 real 0m0.525s 00:03:42.741 user 0m0.289s 00:03:42.741 sys 0m0.268s 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.741 16:07:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:42.741 ************************************ 00:03:42.741 END TEST custom_alloc 
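custom_alloc finishes green: the pool read back from /proc/meminfo (512 pages) equals the requested nr_hugepages plus zero surplus and zero reserved pages, and node 0 holds the full 512 it was assigned. Condensed into a sketch of that bookkeeping (single-node case, reusing the get_meminfo_sketch helper above; a simplification of the hugepages.sh checks, not their exact code):

    verify_custom_alloc_sketch() {
        local expected=512 total surp resv node0
        total=$(get_meminfo_sketch HugePages_Total)     # 512 in this run
        surp=$(get_meminfo_sketch HugePages_Surp)       # 0
        resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0
        node0=$(get_meminfo_sketch HugePages_Total 0)   # node 0 share, 512 here

        # Global pool must be fully accounted for by request + surplus + reserved.
        (( total == expected + surp + resv )) || return 1
        # Per-node split: node 0 was asked to carry the whole pool.
        echo "node0=$node0 expecting $expected"
        [[ $node0 == "$expected" ]]
    }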
00:03:42.741 ************************************ 00:03:42.741 16:07:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:42.741 16:07:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:42.741 16:07:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.741 16:07:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.741 16:07:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.741 ************************************ 00:03:42.741 START TEST no_shrink_alloc 00:03:42.741 ************************************ 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.741 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.314 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.314 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:43.314 
16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095908 kB' 'MemAvailable: 9475564 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452968 kB' 'Inactive: 1265220 kB' 'Active(anon): 132108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123104 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132152 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71124 kB' 'KernelStack: 6308 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
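The no_shrink_alloc pass asks for 2097152 kB, i.e. 1024 pages at the 2048 kB hugepage size, all on node 0, and verify_nr_hugepages begins by checking that transparent hugepages are not forced off before it counts AnonHugePages (the per-key scan of that counter continues below). A sketch of that gate, assuming the usual sysfs knob:

    anon_hugepages_sketch() {
        local thp anon=0
        thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
        # Only count AnonHugePages when THP is not pinned to "[never]".
        if [[ $thp != *"[never]"* ]]; then
            anon=$(get_meminfo_sketch AnonHugePages)           # 0 kB here
        fi
        echo "anon_hugepages=$anon"
    }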
00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 
16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.314 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 
16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.315 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095908 kB' 'MemAvailable: 9475564 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452744 kB' 'Inactive: 1265220 kB' 'Active(anon): 131884 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132152 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71124 kB' 'KernelStack: 6276 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.316 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.317 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095908 kB' 'MemAvailable: 9475564 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452448 kB' 'Inactive: 1265220 kB' 'Active(anon): 131588 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122748 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132136 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71108 kB' 'KernelStack: 6288 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.318 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.319 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.320 nr_hugepages=1024 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.320 resv_hugepages=0 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.320 surplus_hugepages=0 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.320 anon_hugepages=0 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
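
At this point in the trace, setup/hugepages.sh has extracted anon=0, surp=0 and resv=0 from the /proc/meminfo snapshots and echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; the arithmetic at hugepages.sh@107 and @109 then asserts that the 1024 requested pages are fully accounted for. A minimal sketch of that accounting check, using illustrative variable names rather than the script's own locals, might look like this:

    #!/usr/bin/env bash
    # Hypothetical sketch of the hugepage accounting asserted around hugepages.sh@107-109.
    # The values mirror what the trace above reports; the names are illustrative only.
    requested=1024    # pages requested via nr_hugepages
    nr_hugepages=1024 # HugePages_Total from /proc/meminfo
    surp=0            # HugePages_Surp from /proc/meminfo
    resv=0            # HugePages_Rsvd from /proc/meminfo
    anon=0            # AnonHugePages (kB), expected to stay 0 in this test

    # The allocation is considered healthy only if the requested count is fully
    # covered by allocated + surplus + reserved pages.
    if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
        echo "hugepage accounting OK: ${nr_hugepages} pages, surp=${surp}, resv=${resv}, anon=${anon}"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi
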
00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095404 kB' 'MemAvailable: 9475060 kB' 'Buffers: 2436 kB' 'Cached: 1594108 kB' 'SwapCached: 0 kB' 'Active: 452436 kB' 'Inactive: 1265220 kB' 'Active(anon): 131576 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122740 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132136 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71108 kB' 'KernelStack: 6288 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
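
The IFS=': ' / read -r var val _ pattern that dominates the trace is how setup/common.sh's get_meminfo walks the printf'd /proc/meminfo snapshot until it reaches the requested key (here HugePages_Total) and echoes its value; the mem=("${mem[@]#Node +([0-9]) }") step strips the "Node N " prefix when a per-node file is read. A rough, simplified equivalent of that lookup is sketched below; the function name and the sed-based prefix handling are assumptions for illustration, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Simplified sketch of a get_meminfo-style lookup; not the exact SPDK implementation.
    # Prints the value of one field from /proc/meminfo, or from a per-node meminfo
    # file when a node id is supplied (per-node lines carry a "Node <N> " prefix).
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Drop any "Node <N> " prefix, then split each "Key: value [unit]" line
        # and stop as soon as the requested key matches.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo_sketch HugePages_Total   # would print 1024 on the VM in this trace
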
00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.320 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.321 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095152 kB' 'MemUsed: 4146828 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 1265220 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1596544 kB' 'Mapped: 48592 kB' 'AnonPages: 122792 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61028 kB' 'Slab: 132136 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.322 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 
16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.323 node0=1024 expecting 1024 00:03:43.323 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.324 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.324 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:43.324 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:43.324 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:43.324 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.324 16:07:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.583 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.583 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.583 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:43.583 16:07:27 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095368 kB' 'MemAvailable: 9475028 kB' 'Buffers: 2436 kB' 'Cached: 1594112 kB' 'SwapCached: 0 kB' 'Active: 453472 kB' 'Inactive: 1265224 kB' 'Active(anon): 132612 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132132 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71104 kB' 'KernelStack: 6276 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.583 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.584 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095368 kB' 'MemAvailable: 9475028 kB' 'Buffers: 2436 kB' 'Cached: 1594112 kB' 'SwapCached: 0 kB' 'Active: 452724 kB' 'Inactive: 1265224 kB' 'Active(anon): 131864 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123044 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132140 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71112 kB' 'KernelStack: 6256 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
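The trace entries above and below all come from the same meminfo lookup pattern: the script loads /proc/meminfo (or a per-node meminfo file), strips any "Node N " prefix, then walks "key: value" pairs until the requested key (here HugePages_Surp) matches. The following is only a minimal standalone sketch inferred from this trace, not the project's actual setup/common.sh; the function name get_meminfo_sketch and its argument handling are illustrative assumptions.

# Hedged reconstruction of the parsing loop visible in the xtrace output.
shopt -s extglob                                  # needed for the +([0-9]) prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}                      # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    # Prefer the per-node meminfo file when a node number is given and present.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"                     # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")              # drop the "Node N " prefix, if any
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"    # split "key: value kB" into var/val
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    echo 0                                        # key not found: report 0
}

# Example (illustrative): get_meminfo_sketch HugePages_Surp   -> prints 0 on the host in this log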
00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.847 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 
16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.848 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095368 kB' 'MemAvailable: 9475028 kB' 'Buffers: 2436 kB' 'Cached: 1594112 kB' 'SwapCached: 0 kB' 'Active: 452692 kB' 'Inactive: 1265224 kB' 'Active(anon): 131832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122996 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132180 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71152 kB' 'KernelStack: 6272 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.849 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.850 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.851 nr_hugepages=1024 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.851 resv_hugepages=0 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.851 surplus_hugepages=0 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.851 anon_hugepages=0 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095368 kB' 'MemAvailable: 9475028 kB' 'Buffers: 2436 kB' 'Cached: 1594112 kB' 'SwapCached: 0 kB' 'Active: 452448 kB' 'Inactive: 1265224 kB' 'Active(anon): 131588 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61028 kB' 'Slab: 132180 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71152 kB' 'KernelStack: 6272 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 362844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.851 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
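The echoed values earlier in this run (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed a consistency check traced around setup/hugepages.sh@107, which appears to verify that the hugepage total matches the requested count plus surplus and reserved pages. Below is a hedged, standalone approximation of that check using awk against /proc/meminfo rather than the script's own helper; nr_requested and the variable names are illustrative assumptions, not the script's identifiers.

# Read the relevant hugepage counters straight from /proc/meminfo.
nr_requested=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
anon=$(awk '/^AnonHugePages:/   {print $2}' /proc/meminfo)

echo "nr_hugepages=$total"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Sanity check comparable to the traced test: allocated total should equal
# the requested count plus any surplus and reserved pages.
if (( total == nr_requested + surp + resv )); then
    echo "hugepage accounting matches the requested allocation"
fi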
00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.852 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
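
Every entry in this trace carries a wall-clock time, the active test name and a script@line marker before the command. That shape is what bash xtrace (set -x) produces once PS4 is customised; SPDK's real PS4 lives in its test harness and differs in detail, and the leading elapsed-time column appears to come from the CI console rather than from the shell. A rough illustration of the mechanism only, with TEST_TAG as a made-up stand-in for whatever the harness actually tracks:

    #!/usr/bin/env bash
    TEST_TAG=setup.sh.hugepages.no_shrink_alloc   # hypothetical stand-in for the harness' test name
    PS4='+ \t ${TEST_TAG} -- ${BASH_SOURCE##*/}@${LINENO} -- # '
    set -x
    IFS=': ' read -r var val _ <<< 'HugePages_Total: 1024'   # traced like the entries above
    set +x
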
00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8095368 kB' 'MemUsed: 4146612 kB' 'SwapCached: 0 kB' 'Active: 
452448 kB' 'Inactive: 1265224 kB' 'Active(anon): 131588 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1596548 kB' 'Mapped: 48592 kB' 'AnonPages: 122696 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61028 kB' 'Slab: 132180 kB' 'SReclaimable: 61028 kB' 'SUnreclaim: 71152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 
16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.853 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.854 node0=1024 expecting 1024 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.854 00:03:43.854 real 0m1.021s 00:03:43.854 user 0m0.492s 00:03:43.854 sys 0m0.594s 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.854 16:07:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.854 ************************************ 00:03:43.854 END TEST no_shrink_alloc 00:03:43.855 ************************************ 00:03:43.855 16:07:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:43.855 16:07:27 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:43.855 16:07:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:43.855 16:07:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.855 
16:07:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.855 16:07:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.855 16:07:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.855 16:07:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.855 16:07:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.855 16:07:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.855 00:03:43.855 real 0m4.466s 00:03:43.855 user 0m2.172s 00:03:43.855 sys 0m2.412s 00:03:43.855 16:07:27 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.855 16:07:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.855 ************************************ 00:03:43.855 END TEST hugepages 00:03:43.855 ************************************ 00:03:43.855 16:07:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:43.855 16:07:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:43.855 16:07:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.855 16:07:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.855 16:07:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.855 ************************************ 00:03:43.855 START TEST driver 00:03:43.855 ************************************ 00:03:43.855 16:07:27 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:44.114 * Looking for test storage... 00:03:44.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:44.114 16:07:27 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:44.114 16:07:27 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.114 16:07:27 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.682 16:07:28 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:44.682 16:07:28 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.682 16:07:28 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.682 16:07:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:44.682 ************************************ 00:03:44.682 START TEST guess_driver 00:03:44.682 ************************************ 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
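
The block that closed out the hugepages suite just above repeats the same field scan against the node-local meminfo file, stripping the "Node 0 " prefix with mapfile plus an extglob substitution, and then checks that node0 ends up with the expected 1024 pages. A sketch of that per-node accounting, with names assumed from the trace rather than taken from setup/hugepages.sh:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) patterns below, as in the traced script

    # Look up one field in a node-local meminfo file; its lines start with "Node <n> ".
    get_node_meminfo() {
        local get=$1 node=$2 var val _ mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")              # drop the "Node 0 " prefix from every line
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    expected=1024
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        total=$(get_node_meminfo HugePages_Total "$node")
        surp=$(get_node_meminfo HugePages_Surp "$node")
        echo "node${node}=${total} expecting ${expected} (surplus ${surp})"
    done

The no_shrink_alloc assertion above is essentially this comparison: the surplus read for node0 is 0, so node0 keeps its 1024 preallocated pages and the test prints "node0=1024 expecting 1024".
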
00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:44.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:44.682 Looking for driver=uio_pci_generic 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.682 16:07:28 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:45.299 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:45.300 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:45.300 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.300 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.300 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:45.300 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.300 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.300 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:45.300 16:07:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.300 16:07:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:45.300 16:07:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:45.300 16:07:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.300 16:07:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.874 00:03:45.874 real 0m1.392s 00:03:45.874 user 0m0.528s 00:03:45.874 sys 0m0.845s 00:03:45.874 16:07:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:45.874 16:07:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:45.874 ************************************ 00:03:45.874 END TEST guess_driver 00:03:45.874 ************************************ 00:03:45.874 16:07:29 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:45.874 00:03:45.874 real 0m2.077s 00:03:45.874 user 0m0.755s 00:03:45.874 sys 0m1.348s 00:03:45.874 16:07:29 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.874 16:07:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:45.874 ************************************ 00:03:45.874 END TEST driver 00:03:45.874 ************************************ 00:03:46.134 16:07:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:46.134 16:07:29 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:46.134 16:07:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.134 16:07:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.134 16:07:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.134 ************************************ 00:03:46.134 START TEST devices 00:03:46.134 ************************************ 00:03:46.134 16:07:29 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:46.134 * Looking for test storage... 00:03:46.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:46.134 16:07:29 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:46.134 16:07:29 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:46.134 16:07:29 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.134 16:07:29 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
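
The guess_driver test that just finished picks a vfio driver only when IOMMU groups are present (or the unsafe no-IOMMU override is enabled) and otherwise falls back to uio_pci_generic, accepting it only if modprobe --show-depends resolves the module to a .ko on the running kernel. A condensed sketch of that decision, reconstructed from the trace; the real logic lives in test/setup/driver.sh and is more involved:

    #!/usr/bin/env bash
    # Decide which userspace I/O driver the setup scripts should bind NVMe devices to.
    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_noiommu=''
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_noiommu=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        # Without nullglob an unmatched glob stays literal, so also check the first entry is real.
        if { (( ${#iommu_groups[@]} > 0 )) && [[ -d ${iommu_groups[0]} ]]; } || [[ $unsafe_noiommu == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # Fall back to uio_pci_generic, but only if the module resolves to a .ko on this kernel.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found'
        return 1
    }

    driver=$(pick_driver)
    echo "Looking for driver=${driver}"

On this VM no IOMMU groups exist and unsafe no-IOMMU mode is off, so the run above lands on uio_pci_generic, which is what the "Looking for driver=uio_pci_generic" line reports.
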
00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:47.070 16:07:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:47.070 No valid GPT data, bailing 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:47.070 16:07:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:47.070 16:07:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:47.070 16:07:30 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:47.070 
16:07:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:47.070 No valid GPT data, bailing 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:47.070 16:07:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:47.070 16:07:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:47.070 16:07:30 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:47.070 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:47.070 No valid GPT data, bailing 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:47.070 16:07:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:47.071 16:07:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:47.071 16:07:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:47.071 16:07:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:47.071 16:07:30 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:47.071 16:07:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:47.071 16:07:30 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:47.071 No valid GPT data, bailing 00:03:47.071 16:07:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:47.071 16:07:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:47.071 16:07:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:47.071 16:07:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:47.071 16:07:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:47.071 16:07:30 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:47.071 16:07:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:47.071 16:07:30 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.071 16:07:30 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.071 16:07:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:47.071 ************************************ 00:03:47.071 START TEST nvme_mount 00:03:47.071 ************************************ 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:47.071 16:07:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:48.449 Creating new GPT entries in memory. 00:03:48.449 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:48.449 other utilities. 00:03:48.449 16:07:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:48.449 16:07:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.449 16:07:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:48.449 16:07:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:48.449 16:07:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:49.384 Creating new GPT entries in memory. 00:03:49.384 The operation has completed successfully. 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57010 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.384 16:07:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.384 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.384 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:49.384 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:49.384 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.384 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.385 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:49.642 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.642 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.210 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:50.210 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:50.210 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.210 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:50.210 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.469 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:50.469 16:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:50.469 16:07:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.469 16:07:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:50.728 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:50.728 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:50.728 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:50.728 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.728 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:50.728 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.987 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.987 00:03:50.987 real 0m3.933s 00:03:50.987 user 0m0.685s 00:03:50.987 sys 0m0.994s 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.987 16:07:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:50.987 ************************************ 00:03:50.987 END TEST nvme_mount 00:03:50.987 ************************************ 00:03:51.246 16:07:34 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:51.246 16:07:34 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:51.246 16:07:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.246 16:07:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.246 16:07:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:51.246 ************************************ 00:03:51.246 START TEST dm_mount 00:03:51.246 ************************************ 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
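The START TEST / END TEST banners and the real/user/sys lines around each test above come from the run_test wrapper in common/autotest_common.sh; a stripped-down sketch of that pattern (the real helper also toggles xtrace, which is what the @1105/@1124/@1142 xtrace_disable calls in the trace are doing):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    # 'time' produces a real/user/sys summary like the ones printed per test above
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }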
00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:51.246 16:07:34 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:52.202 Creating new GPT entries in memory. 00:03:52.202 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:52.202 other utilities. 00:03:52.202 16:07:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:52.202 16:07:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.202 16:07:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.202 16:07:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.202 16:07:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:53.138 Creating new GPT entries in memory. 00:03:53.138 The operation has completed successfully. 00:03:53.138 16:07:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:53.138 16:07:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.138 16:07:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:53.138 16:07:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:53.138 16:07:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:54.513 The operation has completed successfully. 
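Condensed, the partitioning pass above amounts to the commands below; this is a sketch of the same sequence, not the script verbatim (the real helper also waits on udev partition events through sync_dev_uevents.sh):

  disk=/dev/nvme0n1
  size=$(( 1073741824 / 4096 ))              # 262144 LBAs per partition, matching the sector math in the log
  sgdisk "$disk" --zap-all                   # wipe old GPT/MBR structures
  part_start=0 part_end=0
  for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # serialize concurrent sgdisk callers on the same disk, as the test does
    flock "$disk" sgdisk "$disk" --new=${part}:${part_start}:${part_end}
  done                                       # yields --new=1:2048:264191 and --new=2:264192:526335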
00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57444 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.513 16:07:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.513 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.513 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:54.513 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:54.513 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.513 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.513 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.514 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.514 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.772 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.772 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.772 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.772 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:54.772 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.773 16:07:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.032 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.032 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:55.032 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:55.032 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.032 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.032 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.032 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.032 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:55.291 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:55.291 00:03:55.291 real 0m4.168s 00:03:55.291 user 0m0.424s 00:03:55.291 sys 0m0.686s 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.291 16:07:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:55.291 ************************************ 00:03:55.291 END TEST dm_mount 00:03:55.291 ************************************ 00:03:55.291 16:07:38 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:55.291 16:07:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:55.291 16:07:38 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:55.291 16:07:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.291 16:07:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.291 16:07:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.291 16:07:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.291 16:07:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.550 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:55.550 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:55.550 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.550 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.550 16:07:39 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:55.550 16:07:39 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:55.550 16:07:39 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:55.550 16:07:39 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.550 16:07:39 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.550 16:07:39 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.550 16:07:39 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:55.550 00:03:55.550 real 0m9.596s 00:03:55.550 user 0m1.737s 00:03:55.550 sys 0m2.248s 00:03:55.550 16:07:39 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.550 16:07:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.550 ************************************ 00:03:55.550 END TEST devices 00:03:55.550 ************************************ 00:03:55.809 16:07:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:55.809 00:03:55.809 real 0m21.045s 00:03:55.809 user 0m6.795s 00:03:55.809 sys 0m8.701s 00:03:55.809 16:07:39 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.809 16:07:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.809 ************************************ 00:03:55.809 END TEST setup.sh 00:03:55.809 ************************************ 00:03:55.809 16:07:39 -- common/autotest_common.sh@1142 -- # return 0 00:03:55.809 16:07:39 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:56.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.376 Hugepages 00:03:56.376 node hugesize free / total 00:03:56.376 node0 1048576kB 0 / 0 00:03:56.376 node0 2048kB 2048 / 2048 00:03:56.376 00:03:56.376 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:56.376 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:56.376 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:56.634 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:56.634 16:07:40 -- spdk/autotest.sh@130 -- # uname -s 00:03:56.634 16:07:40 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:56.634 16:07:40 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:56.634 16:07:40 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.202 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.202 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.462 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.462 16:07:41 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:58.398 16:07:42 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:58.398 16:07:42 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:58.398 16:07:42 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:58.398 16:07:42 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:58.398 16:07:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:58.398 16:07:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:58.398 16:07:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.398 16:07:42 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:58.398 16:07:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:58.398 16:07:42 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:58.398 16:07:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:58.398 16:07:42 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.964 Waiting for block devices as requested 00:03:58.964 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:58.964 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:58.964 16:07:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:58.964 16:07:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:58.964 16:07:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:58.964 16:07:42 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:03:58.964 16:07:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:58.964 16:07:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:58.964 16:07:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:58.964 16:07:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:03:58.964 16:07:42 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:03:58.964 16:07:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:03:58.964 16:07:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:03:58.964 16:07:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:58.964 16:07:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:58.964 16:07:42 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:58.964 16:07:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:58.964 16:07:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:58.964 16:07:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:03:58.964 16:07:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:58.964 16:07:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:59.222 16:07:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:59.222 16:07:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:59.222 16:07:42 -- common/autotest_common.sh@1557 -- # continue 00:03:59.222 
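Per controller, the id-ctrl parsing above boils down to two checks; a sketch of the same logic (nvme-cli assumed present, as in the log):

  for ctrlr in /dev/nvme1 /dev/nvme0; do
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # ' 0x12a' for both controllers here
    oacs_ns_manage=$(( oacs & 0x8 ))                              # OACS bit 3 = namespace management support
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    if (( oacs_ns_manage != 0 )) && (( unvmcap == 0 )); then
      # namespace management is supported but there is no unallocated capacity: nothing to revert
      continue
    fi
    echo "$ctrlr would need a namespace revert"
  done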
16:07:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:59.222 16:07:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:59.222 16:07:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:59.222 16:07:42 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:03:59.222 16:07:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:59.222 16:07:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:59.222 16:07:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:59.222 16:07:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:59.222 16:07:42 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:59.222 16:07:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:59.222 16:07:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:59.222 16:07:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:59.222 16:07:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:59.222 16:07:42 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:59.222 16:07:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:59.222 16:07:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:59.222 16:07:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:59.222 16:07:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:59.222 16:07:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:59.222 16:07:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:59.222 16:07:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:59.222 16:07:42 -- common/autotest_common.sh@1557 -- # continue 00:03:59.222 16:07:42 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:59.222 16:07:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.222 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:03:59.223 16:07:42 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:59.223 16:07:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.223 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:03:59.223 16:07:42 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.787 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.044 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.044 16:07:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:00.044 16:07:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:00.044 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:04:00.044 16:07:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:00.044 16:07:43 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:00.044 16:07:43 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:00.044 16:07:43 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:00.044 16:07:43 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:00.044 16:07:43 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:00.044 16:07:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:00.044 16:07:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:00.044 16:07:43 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.044 16:07:43 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:00.044 16:07:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:00.044 16:07:43 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:00.044 16:07:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:00.044 16:07:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:00.044 16:07:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:00.044 16:07:43 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:00.044 16:07:43 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:00.044 16:07:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:00.044 16:07:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:00.044 16:07:43 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:00.044 16:07:43 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:00.044 16:07:43 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:00.044 16:07:43 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:00.044 16:07:43 -- common/autotest_common.sh@1593 -- # return 0 00:04:00.044 16:07:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:00.044 16:07:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:00.044 16:07:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:00.044 16:07:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:00.044 16:07:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:00.044 16:07:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.044 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:04:00.044 16:07:43 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:00.044 16:07:43 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:00.044 16:07:43 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:00.044 16:07:43 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:00.044 16:07:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.044 16:07:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.044 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:04:00.044 ************************************ 00:04:00.044 START TEST env 00:04:00.044 ************************************ 00:04:00.044 16:07:43 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:00.302 * Looking for test storage... 
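The loop above is opal_revert_cleanup deciding whether any controller needs an OPAL revert; a sketch of that filter, using $rootdir as it appears elsewhere in the log (the opal_bdfs array name is illustrative):

  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  opal_bdfs=()
  for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")   # only this specific device id gets reverted
  done
  # here both controllers report 0x0010 (QEMU's emulated NVMe), so the list stays empty and the step returns 0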
00:04:00.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:00.302 16:07:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:00.302 16:07:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.302 16:07:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.302 16:07:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.302 ************************************ 00:04:00.302 START TEST env_memory 00:04:00.302 ************************************ 00:04:00.302 16:07:43 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:00.302 00:04:00.302 00:04:00.302 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.302 http://cunit.sourceforge.net/ 00:04:00.302 00:04:00.302 00:04:00.302 Suite: memory 00:04:00.302 Test: alloc and free memory map ...[2024-07-12 16:07:43.838118] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:00.302 passed 00:04:00.302 Test: mem map translation ...[2024-07-12 16:07:43.868714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:00.302 [2024-07-12 16:07:43.868759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:00.302 [2024-07-12 16:07:43.868815] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:00.302 [2024-07-12 16:07:43.868827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:00.302 passed 00:04:00.302 Test: mem map registration ...[2024-07-12 16:07:43.932553] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:00.303 [2024-07-12 16:07:43.932603] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:00.303 passed 00:04:00.303 Test: mem map adjacent registrations ...passed 00:04:00.303 00:04:00.303 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.303 suites 1 1 n/a 0 0 00:04:00.303 tests 4 4 4 0 0 00:04:00.303 asserts 152 152 152 0 n/a 00:04:00.303 00:04:00.303 Elapsed time = 0.214 seconds 00:04:00.303 00:04:00.303 real 0m0.228s 00:04:00.303 user 0m0.215s 00:04:00.303 sys 0m0.010s 00:04:00.303 16:07:44 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.303 16:07:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:00.303 ************************************ 00:04:00.303 END TEST env_memory 00:04:00.303 ************************************ 00:04:00.560 16:07:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:00.560 16:07:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:00.560 16:07:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.560 16:07:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.560 16:07:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.560 ************************************ 00:04:00.560 START TEST env_vtophys 
00:04:00.560 ************************************ 00:04:00.560 16:07:44 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:00.560 EAL: lib.eal log level changed from notice to debug 00:04:00.560 EAL: Detected lcore 0 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 1 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 2 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 3 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 4 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 5 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 6 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 7 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 8 as core 0 on socket 0 00:04:00.560 EAL: Detected lcore 9 as core 0 on socket 0 00:04:00.560 EAL: Maximum logical cores by configuration: 128 00:04:00.560 EAL: Detected CPU lcores: 10 00:04:00.560 EAL: Detected NUMA nodes: 1 00:04:00.560 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:00.560 EAL: Detected shared linkage of DPDK 00:04:00.560 EAL: No shared files mode enabled, IPC will be disabled 00:04:00.560 EAL: Selected IOVA mode 'PA' 00:04:00.560 EAL: Probing VFIO support... 00:04:00.560 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:00.560 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:00.560 EAL: Ask a virtual area of 0x2e000 bytes 00:04:00.560 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:00.560 EAL: Setting up physically contiguous memory... 00:04:00.560 EAL: Setting maximum number of open files to 524288 00:04:00.560 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:00.560 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:00.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.560 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:00.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.560 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:00.560 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:00.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.560 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:00.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.560 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:00.560 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:00.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.560 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:00.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.560 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:00.560 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:00.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.560 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:00.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.561 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.561 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:00.561 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:00.561 EAL: Hugepages will be freed exactly as allocated. 
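The "Module /sys/module/vfio not found" lines are EAL probing for VFIO before it settles on IOVA mode 'PA'; an equivalent check from the shell (a rough illustration, not what EAL literally runs):

  if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
    echo "vfio is loaded; EAL could bind devices through vfio-pci"
  else
    echo "vfio not loaded; EAL skips VFIO support and falls back to IOVA mode PA, as above"
  fi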
00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: TSC frequency is ~2200000 KHz 00:04:00.561 EAL: Main lcore 0 is ready (tid=7fb072a1da00;cpuset=[0]) 00:04:00.561 EAL: Trying to obtain current memory policy. 00:04:00.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.561 EAL: Restoring previous memory policy: 0 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was expanded by 2MB 00:04:00.561 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:00.561 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:00.561 EAL: Mem event callback 'spdk:(nil)' registered 00:04:00.561 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:00.561 00:04:00.561 00:04:00.561 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.561 http://cunit.sourceforge.net/ 00:04:00.561 00:04:00.561 00:04:00.561 Suite: components_suite 00:04:00.561 Test: vtophys_malloc_test ...passed 00:04:00.561 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:00.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.561 EAL: Restoring previous memory policy: 4 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was expanded by 4MB 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was shrunk by 4MB 00:04:00.561 EAL: Trying to obtain current memory policy. 00:04:00.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.561 EAL: Restoring previous memory policy: 4 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was expanded by 6MB 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was shrunk by 6MB 00:04:00.561 EAL: Trying to obtain current memory policy. 00:04:00.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.561 EAL: Restoring previous memory policy: 4 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was expanded by 10MB 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was shrunk by 10MB 00:04:00.561 EAL: Trying to obtain current memory policy. 
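The grow/shrink sizes in this suite (4MB, 6MB, 10MB, and so on up to 1026MB below) are each 2MB more than a power of two, consistent with the test doubling its allocation every round on top of the 2MB already reserved; the pattern in a line of arithmetic:

  for (( mb = 2; mb <= 1024; mb *= 2 )); do
    printf '%4dMB reported = %4dMB allocation + 2MB\n' $(( mb + 2 )) "$mb"
  done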
00:04:00.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.561 EAL: Restoring previous memory policy: 4 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was expanded by 18MB 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was shrunk by 18MB 00:04:00.561 EAL: Trying to obtain current memory policy. 00:04:00.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.561 EAL: Restoring previous memory policy: 4 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was expanded by 34MB 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was shrunk by 34MB 00:04:00.561 EAL: Trying to obtain current memory policy. 00:04:00.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.561 EAL: Restoring previous memory policy: 4 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was expanded by 66MB 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was shrunk by 66MB 00:04:00.561 EAL: Trying to obtain current memory policy. 00:04:00.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.561 EAL: Restoring previous memory policy: 4 00:04:00.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.561 EAL: request: mp_malloc_sync 00:04:00.561 EAL: No shared files mode enabled, IPC is disabled 00:04:00.561 EAL: Heap on socket 0 was expanded by 130MB 00:04:00.818 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.818 EAL: request: mp_malloc_sync 00:04:00.818 EAL: No shared files mode enabled, IPC is disabled 00:04:00.818 EAL: Heap on socket 0 was shrunk by 130MB 00:04:00.818 EAL: Trying to obtain current memory policy. 00:04:00.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.818 EAL: Restoring previous memory policy: 4 00:04:00.818 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.818 EAL: request: mp_malloc_sync 00:04:00.818 EAL: No shared files mode enabled, IPC is disabled 00:04:00.819 EAL: Heap on socket 0 was expanded by 258MB 00:04:00.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.819 EAL: request: mp_malloc_sync 00:04:00.819 EAL: No shared files mode enabled, IPC is disabled 00:04:00.819 EAL: Heap on socket 0 was shrunk by 258MB 00:04:00.819 EAL: Trying to obtain current memory policy. 
00:04:00.819 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.819 EAL: Restoring previous memory policy: 4 00:04:00.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.819 EAL: request: mp_malloc_sync 00:04:00.819 EAL: No shared files mode enabled, IPC is disabled 00:04:00.819 EAL: Heap on socket 0 was expanded by 514MB 00:04:00.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.077 EAL: request: mp_malloc_sync 00:04:01.077 EAL: No shared files mode enabled, IPC is disabled 00:04:01.077 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.077 EAL: Trying to obtain current memory policy. 00:04:01.077 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.077 EAL: Restoring previous memory policy: 4 00:04:01.077 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.077 EAL: request: mp_malloc_sync 00:04:01.077 EAL: No shared files mode enabled, IPC is disabled 00:04:01.077 EAL: Heap on socket 0 was expanded by 1026MB 00:04:01.077 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.336 passed 00:04:01.336 00:04:01.336 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.336 suites 1 1 n/a 0 0 00:04:01.336 tests 2 2 2 0 0 00:04:01.336 asserts 5253 5253 5253 0 n/a 00:04:01.336 00:04:01.336 Elapsed time = 0.634 seconds 00:04:01.336 EAL: request: mp_malloc_sync 00:04:01.336 EAL: No shared files mode enabled, IPC is disabled 00:04:01.336 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:01.336 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.336 EAL: request: mp_malloc_sync 00:04:01.336 EAL: No shared files mode enabled, IPC is disabled 00:04:01.336 EAL: Heap on socket 0 was shrunk by 2MB 00:04:01.336 EAL: No shared files mode enabled, IPC is disabled 00:04:01.336 EAL: No shared files mode enabled, IPC is disabled 00:04:01.336 EAL: No shared files mode enabled, IPC is disabled 00:04:01.336 00:04:01.336 real 0m0.818s 00:04:01.336 user 0m0.427s 00:04:01.337 sys 0m0.266s 00:04:01.337 16:07:44 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.337 16:07:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:01.337 ************************************ 00:04:01.337 END TEST env_vtophys 00:04:01.337 ************************************ 00:04:01.337 16:07:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:01.337 16:07:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:01.337 16:07:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.337 16:07:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.337 16:07:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.337 ************************************ 00:04:01.337 START TEST env_pci 00:04:01.337 ************************************ 00:04:01.337 16:07:44 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:01.337 00:04:01.337 00:04:01.337 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.337 http://cunit.sourceforge.net/ 00:04:01.337 00:04:01.337 00:04:01.337 Suite: pci 00:04:01.337 Test: pci_hook ...[2024-07-12 16:07:44.948012] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58625 has claimed it 00:04:01.337 passed 00:04:01.337 00:04:01.337 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.337 suites 1 1 n/a 0 0 00:04:01.337 tests 1 1 1 0 0 00:04:01.337 asserts 25 25 25 0 n/a 00:04:01.337 
00:04:01.337 Elapsed time = 0.002 seconds 00:04:01.337 EAL: Cannot find device (10000:00:01.0) 00:04:01.337 EAL: Failed to attach device on primary process 00:04:01.337 00:04:01.337 real 0m0.018s 00:04:01.337 user 0m0.011s 00:04:01.337 sys 0m0.007s 00:04:01.337 16:07:44 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.337 16:07:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:01.337 ************************************ 00:04:01.337 END TEST env_pci 00:04:01.337 ************************************ 00:04:01.337 16:07:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:01.337 16:07:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:01.337 16:07:44 env -- env/env.sh@15 -- # uname 00:04:01.337 16:07:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:01.337 16:07:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:01.337 16:07:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.337 16:07:44 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:01.337 16:07:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.337 16:07:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.337 ************************************ 00:04:01.337 START TEST env_dpdk_post_init 00:04:01.337 ************************************ 00:04:01.337 16:07:45 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.337 EAL: Detected CPU lcores: 10 00:04:01.337 EAL: Detected NUMA nodes: 1 00:04:01.337 EAL: Detected shared linkage of DPDK 00:04:01.337 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.337 EAL: Selected IOVA mode 'PA' 00:04:01.596 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.596 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:01.596 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:01.596 Starting DPDK initialization... 00:04:01.596 Starting SPDK post initialization... 00:04:01.596 SPDK NVMe probe 00:04:01.596 Attaching to 0000:00:10.0 00:04:01.596 Attaching to 0000:00:11.0 00:04:01.596 Attached to 0000:00:10.0 00:04:01.596 Attached to 0000:00:11.0 00:04:01.596 Cleaning up... 
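The argv='-c 0x1 ' and argv+=--base-virtaddr=... lines show how env.sh assembles the DPDK arguments used for env_dpdk_post_init; roughly the following, with $testdir standing in for the test/env directory:

  argv='-c 0x1 '                                  # one core is enough for these unit tests
  if [[ $(uname) == Linux ]]; then
    argv+='--base-virtaddr=0x200000000000'        # pin EAL mappings to a fixed virtual base
  fi
  # $argv is deliberately left unquoted so it splits into two separate options
  "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv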
00:04:01.596 00:04:01.596 real 0m0.166s 00:04:01.596 user 0m0.045s 00:04:01.596 sys 0m0.022s 00:04:01.596 16:07:45 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.596 16:07:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.596 ************************************ 00:04:01.596 END TEST env_dpdk_post_init 00:04:01.596 ************************************ 00:04:01.596 16:07:45 env -- common/autotest_common.sh@1142 -- # return 0 00:04:01.596 16:07:45 env -- env/env.sh@26 -- # uname 00:04:01.596 16:07:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:01.596 16:07:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:01.596 16:07:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.596 16:07:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.596 16:07:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.596 ************************************ 00:04:01.596 START TEST env_mem_callbacks 00:04:01.596 ************************************ 00:04:01.596 16:07:45 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:01.596 EAL: Detected CPU lcores: 10 00:04:01.596 EAL: Detected NUMA nodes: 1 00:04:01.596 EAL: Detected shared linkage of DPDK 00:04:01.596 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.596 EAL: Selected IOVA mode 'PA' 00:04:01.855 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.855 00:04:01.855 00:04:01.855 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.855 http://cunit.sourceforge.net/ 00:04:01.855 00:04:01.855 00:04:01.855 Suite: memory 00:04:01.855 Test: test ... 
00:04:01.855 register 0x200000200000 2097152 00:04:01.855 malloc 3145728 00:04:01.855 register 0x200000400000 4194304 00:04:01.855 buf 0x200000500000 len 3145728 PASSED 00:04:01.855 malloc 64 00:04:01.855 buf 0x2000004fff40 len 64 PASSED 00:04:01.855 malloc 4194304 00:04:01.855 register 0x200000800000 6291456 00:04:01.855 buf 0x200000a00000 len 4194304 PASSED 00:04:01.855 free 0x200000500000 3145728 00:04:01.855 free 0x2000004fff40 64 00:04:01.855 unregister 0x200000400000 4194304 PASSED 00:04:01.855 free 0x200000a00000 4194304 00:04:01.855 unregister 0x200000800000 6291456 PASSED 00:04:01.855 malloc 8388608 00:04:01.855 register 0x200000400000 10485760 00:04:01.855 buf 0x200000600000 len 8388608 PASSED 00:04:01.855 free 0x200000600000 8388608 00:04:01.855 unregister 0x200000400000 10485760 PASSED 00:04:01.855 passed 00:04:01.855 00:04:01.855 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.855 suites 1 1 n/a 0 0 00:04:01.855 tests 1 1 1 0 0 00:04:01.855 asserts 15 15 15 0 n/a 00:04:01.855 00:04:01.855 Elapsed time = 0.007 seconds 00:04:01.855 00:04:01.855 real 0m0.140s 00:04:01.855 user 0m0.019s 00:04:01.855 sys 0m0.020s 00:04:01.855 16:07:45 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.855 16:07:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:01.855 ************************************ 00:04:01.855 END TEST env_mem_callbacks 00:04:01.855 ************************************ 00:04:01.855 16:07:45 env -- common/autotest_common.sh@1142 -- # return 0 00:04:01.855 00:04:01.855 real 0m1.708s 00:04:01.855 user 0m0.839s 00:04:01.855 sys 0m0.527s 00:04:01.855 16:07:45 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.855 16:07:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.855 ************************************ 00:04:01.855 END TEST env 00:04:01.855 ************************************ 00:04:01.855 16:07:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:01.855 16:07:45 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:01.855 16:07:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.855 16:07:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.855 16:07:45 -- common/autotest_common.sh@10 -- # set +x 00:04:01.855 ************************************ 00:04:01.855 START TEST rpc 00:04:01.855 ************************************ 00:04:01.855 16:07:45 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:01.855 * Looking for test storage... 00:04:01.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.855 16:07:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58735 00:04:01.855 16:07:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.855 16:07:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:01.855 16:07:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58735 00:04:01.855 16:07:45 rpc -- common/autotest_common.sh@829 -- # '[' -z 58735 ']' 00:04:01.855 16:07:45 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.855 16:07:45 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.855 16:07:45 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
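[editor's note] In the env_mem_callbacks run above, each "register"/"unregister" line is the test's memory-notification callback firing: the 3 MiB malloc triggers registration of a 4 MiB region (allocations appear to be backed in 2 MiB hugepage units), the 64-byte malloc reuses already-registered memory and produces no new callback, and the matching frees drive the unregister calls. A hedged sketch for reproducing just that trace, assuming the same checkout path:

  # Run the callback unit test directly and keep only the callback/allocation lines.
  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/test/env/mem_callbacks/mem_callbacks" | grep -E '^(register|unregister|malloc|free|buf)'
  # The CUnit summary at the end should still report 1 test, 15 asserts, 0 failures,
  # matching the run captured in this log.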
00:04:01.855 16:07:45 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.855 16:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.115 [2024-07-12 16:07:45.606586] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:02.115 [2024-07-12 16:07:45.606736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58735 ] 00:04:02.115 [2024-07-12 16:07:45.741448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.115 [2024-07-12 16:07:45.796056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:02.115 [2024-07-12 16:07:45.796123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58735' to capture a snapshot of events at runtime. 00:04:02.115 [2024-07-12 16:07:45.796149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:02.115 [2024-07-12 16:07:45.796156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:02.115 [2024-07-12 16:07:45.796162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58735 for offline analysis/debug. 00:04:02.115 [2024-07-12 16:07:45.796193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.115 [2024-07-12 16:07:45.824234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:03.051 16:07:46 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.051 16:07:46 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:03.051 16:07:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:03.051 16:07:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:03.051 16:07:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:03.051 16:07:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:03.051 16:07:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.051 16:07:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.051 16:07:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.051 ************************************ 00:04:03.051 START TEST rpc_integrity 00:04:03.051 ************************************ 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:03.051 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.051 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.051 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.051 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.051 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.051 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:03.051 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.051 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.051 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.051 { 00:04:03.051 "name": "Malloc0", 00:04:03.051 "aliases": [ 00:04:03.051 "d756ced0-2cab-477c-89db-0f49adb9e40a" 00:04:03.051 ], 00:04:03.051 "product_name": "Malloc disk", 00:04:03.051 "block_size": 512, 00:04:03.051 "num_blocks": 16384, 00:04:03.051 "uuid": "d756ced0-2cab-477c-89db-0f49adb9e40a", 00:04:03.051 "assigned_rate_limits": { 00:04:03.051 "rw_ios_per_sec": 0, 00:04:03.051 "rw_mbytes_per_sec": 0, 00:04:03.051 "r_mbytes_per_sec": 0, 00:04:03.051 "w_mbytes_per_sec": 0 00:04:03.051 }, 00:04:03.051 "claimed": false, 00:04:03.051 "zoned": false, 00:04:03.051 "supported_io_types": { 00:04:03.051 "read": true, 00:04:03.051 "write": true, 00:04:03.051 "unmap": true, 00:04:03.051 "flush": true, 00:04:03.051 "reset": true, 00:04:03.051 "nvme_admin": false, 00:04:03.051 "nvme_io": false, 00:04:03.051 "nvme_io_md": false, 00:04:03.051 "write_zeroes": true, 00:04:03.051 "zcopy": true, 00:04:03.051 "get_zone_info": false, 00:04:03.051 "zone_management": false, 00:04:03.051 "zone_append": false, 00:04:03.051 "compare": false, 00:04:03.051 "compare_and_write": false, 00:04:03.051 "abort": true, 00:04:03.051 "seek_hole": false, 00:04:03.051 "seek_data": false, 00:04:03.051 "copy": true, 00:04:03.051 "nvme_iov_md": false 00:04:03.051 }, 00:04:03.051 "memory_domains": [ 00:04:03.052 { 00:04:03.052 "dma_device_id": "system", 00:04:03.052 "dma_device_type": 1 00:04:03.052 }, 00:04:03.052 { 00:04:03.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.052 "dma_device_type": 2 00:04:03.052 } 00:04:03.052 ], 00:04:03.052 "driver_specific": {} 00:04:03.052 } 00:04:03.052 ]' 00:04:03.052 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.052 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.052 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:03.052 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.052 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.052 [2024-07-12 16:07:46.705525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:03.052 [2024-07-12 16:07:46.705614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.052 [2024-07-12 16:07:46.705640] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12267c0 00:04:03.052 [2024-07-12 16:07:46.705649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.052 [2024-07-12 16:07:46.706835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.052 [2024-07-12 16:07:46.706906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:03.052 Passthru0 00:04:03.052 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.052 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.052 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.052 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.052 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.052 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.052 { 00:04:03.052 "name": "Malloc0", 00:04:03.052 "aliases": [ 00:04:03.052 "d756ced0-2cab-477c-89db-0f49adb9e40a" 00:04:03.052 ], 00:04:03.052 "product_name": "Malloc disk", 00:04:03.052 "block_size": 512, 00:04:03.052 "num_blocks": 16384, 00:04:03.052 "uuid": "d756ced0-2cab-477c-89db-0f49adb9e40a", 00:04:03.052 "assigned_rate_limits": { 00:04:03.052 "rw_ios_per_sec": 0, 00:04:03.052 "rw_mbytes_per_sec": 0, 00:04:03.052 "r_mbytes_per_sec": 0, 00:04:03.052 "w_mbytes_per_sec": 0 00:04:03.052 }, 00:04:03.052 "claimed": true, 00:04:03.052 "claim_type": "exclusive_write", 00:04:03.052 "zoned": false, 00:04:03.052 "supported_io_types": { 00:04:03.052 "read": true, 00:04:03.052 "write": true, 00:04:03.052 "unmap": true, 00:04:03.052 "flush": true, 00:04:03.052 "reset": true, 00:04:03.052 "nvme_admin": false, 00:04:03.052 "nvme_io": false, 00:04:03.052 "nvme_io_md": false, 00:04:03.052 "write_zeroes": true, 00:04:03.052 "zcopy": true, 00:04:03.052 "get_zone_info": false, 00:04:03.052 "zone_management": false, 00:04:03.052 "zone_append": false, 00:04:03.052 "compare": false, 00:04:03.052 "compare_and_write": false, 00:04:03.052 "abort": true, 00:04:03.052 "seek_hole": false, 00:04:03.052 "seek_data": false, 00:04:03.052 "copy": true, 00:04:03.052 "nvme_iov_md": false 00:04:03.052 }, 00:04:03.052 "memory_domains": [ 00:04:03.052 { 00:04:03.052 "dma_device_id": "system", 00:04:03.052 "dma_device_type": 1 00:04:03.052 }, 00:04:03.052 { 00:04:03.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.052 "dma_device_type": 2 00:04:03.052 } 00:04:03.052 ], 00:04:03.052 "driver_specific": {} 00:04:03.052 }, 00:04:03.052 { 00:04:03.052 "name": "Passthru0", 00:04:03.052 "aliases": [ 00:04:03.052 "2384e974-c43f-57cb-9f1a-82b3b7152371" 00:04:03.052 ], 00:04:03.052 "product_name": "passthru", 00:04:03.052 "block_size": 512, 00:04:03.052 "num_blocks": 16384, 00:04:03.052 "uuid": "2384e974-c43f-57cb-9f1a-82b3b7152371", 00:04:03.052 "assigned_rate_limits": { 00:04:03.052 "rw_ios_per_sec": 0, 00:04:03.052 "rw_mbytes_per_sec": 0, 00:04:03.052 "r_mbytes_per_sec": 0, 00:04:03.052 "w_mbytes_per_sec": 0 00:04:03.052 }, 00:04:03.052 "claimed": false, 00:04:03.052 "zoned": false, 00:04:03.052 "supported_io_types": { 00:04:03.052 "read": true, 00:04:03.052 "write": true, 00:04:03.052 "unmap": true, 00:04:03.052 "flush": true, 00:04:03.052 "reset": true, 00:04:03.052 "nvme_admin": false, 00:04:03.052 "nvme_io": false, 00:04:03.052 "nvme_io_md": false, 00:04:03.052 "write_zeroes": true, 00:04:03.052 "zcopy": true, 00:04:03.052 "get_zone_info": false, 00:04:03.052 "zone_management": false, 00:04:03.052 "zone_append": false, 00:04:03.052 "compare": false, 00:04:03.052 "compare_and_write": false, 00:04:03.052 "abort": true, 00:04:03.052 "seek_hole": false, 00:04:03.052 "seek_data": false, 00:04:03.052 "copy": true, 00:04:03.052 "nvme_iov_md": false 00:04:03.052 }, 00:04:03.052 "memory_domains": [ 00:04:03.052 { 00:04:03.052 "dma_device_id": "system", 00:04:03.052 
"dma_device_type": 1 00:04:03.052 }, 00:04:03.052 { 00:04:03.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.052 "dma_device_type": 2 00:04:03.052 } 00:04:03.052 ], 00:04:03.052 "driver_specific": { 00:04:03.052 "passthru": { 00:04:03.052 "name": "Passthru0", 00:04:03.052 "base_bdev_name": "Malloc0" 00:04:03.052 } 00:04:03.052 } 00:04:03.052 } 00:04:03.052 ]' 00:04:03.052 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.311 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.311 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.311 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.311 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.311 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.311 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.311 16:07:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.311 00:04:03.311 real 0m0.330s 00:04:03.311 user 0m0.218s 00:04:03.311 sys 0m0.041s 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.311 16:07:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.311 ************************************ 00:04:03.311 END TEST rpc_integrity 00:04:03.311 ************************************ 00:04:03.311 16:07:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.311 16:07:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:03.311 16:07:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.311 16:07:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.311 16:07:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.311 ************************************ 00:04:03.311 START TEST rpc_plugins 00:04:03.311 ************************************ 00:04:03.311 16:07:46 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:03.311 16:07:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:03.311 16:07:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.311 16:07:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.311 16:07:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.311 16:07:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:03.311 16:07:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:03.311 16:07:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.311 16:07:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.311 
16:07:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.311 16:07:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:03.311 { 00:04:03.311 "name": "Malloc1", 00:04:03.311 "aliases": [ 00:04:03.311 "4d889627-5af2-41f0-ae61-fbba9561a2c5" 00:04:03.311 ], 00:04:03.311 "product_name": "Malloc disk", 00:04:03.311 "block_size": 4096, 00:04:03.311 "num_blocks": 256, 00:04:03.311 "uuid": "4d889627-5af2-41f0-ae61-fbba9561a2c5", 00:04:03.311 "assigned_rate_limits": { 00:04:03.311 "rw_ios_per_sec": 0, 00:04:03.311 "rw_mbytes_per_sec": 0, 00:04:03.311 "r_mbytes_per_sec": 0, 00:04:03.311 "w_mbytes_per_sec": 0 00:04:03.311 }, 00:04:03.311 "claimed": false, 00:04:03.311 "zoned": false, 00:04:03.311 "supported_io_types": { 00:04:03.311 "read": true, 00:04:03.311 "write": true, 00:04:03.311 "unmap": true, 00:04:03.311 "flush": true, 00:04:03.311 "reset": true, 00:04:03.311 "nvme_admin": false, 00:04:03.311 "nvme_io": false, 00:04:03.311 "nvme_io_md": false, 00:04:03.311 "write_zeroes": true, 00:04:03.311 "zcopy": true, 00:04:03.311 "get_zone_info": false, 00:04:03.311 "zone_management": false, 00:04:03.311 "zone_append": false, 00:04:03.311 "compare": false, 00:04:03.311 "compare_and_write": false, 00:04:03.311 "abort": true, 00:04:03.311 "seek_hole": false, 00:04:03.311 "seek_data": false, 00:04:03.311 "copy": true, 00:04:03.311 "nvme_iov_md": false 00:04:03.311 }, 00:04:03.311 "memory_domains": [ 00:04:03.311 { 00:04:03.311 "dma_device_id": "system", 00:04:03.311 "dma_device_type": 1 00:04:03.311 }, 00:04:03.311 { 00:04:03.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.311 "dma_device_type": 2 00:04:03.311 } 00:04:03.311 ], 00:04:03.312 "driver_specific": {} 00:04:03.312 } 00:04:03.312 ]' 00:04:03.312 16:07:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:03.312 16:07:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:03.312 16:07:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:03.312 16:07:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.312 16:07:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.571 16:07:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.571 16:07:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:03.571 16:07:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.571 16:07:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.571 16:07:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.571 16:07:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:03.571 16:07:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:03.571 16:07:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:03.571 00:04:03.571 real 0m0.174s 00:04:03.571 user 0m0.116s 00:04:03.571 sys 0m0.021s 00:04:03.571 16:07:47 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.571 ************************************ 00:04:03.571 END TEST rpc_plugins 00:04:03.571 16:07:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.571 ************************************ 00:04:03.571 16:07:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.571 16:07:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:03.571 16:07:47 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.571 16:07:47 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
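[editor's note] The rpc_plugins case above checks that rpc.py can load extra commands from a Python module: the job's PYTHONPATH (exported earlier in this log) includes test/rpc_plugins, and the test calls the plugin's create_malloc/delete_malloc subcommands. A sketch of the same flow, assuming the same checkout:

  # Exercise the rpc.py plugin hook the way rpc_plugins does.
  SPDK=/home/vagrant/spdk_repo/spdk
  export PYTHONPATH="$SPDK/test/rpc_plugins:$SPDK/python:$PYTHONPATH"
  MALLOC=$("$SPDK/scripts/rpc.py" --plugin rpc_plugin create_malloc)   # plugin-provided command, prints the bdev name
  "$SPDK/scripts/rpc.py" bdev_get_bdevs | jq length                    # expect 1
  "$SPDK/scripts/rpc.py" --plugin rpc_plugin delete_malloc "$MALLOC"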
00:04:03.571 16:07:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.571 ************************************ 00:04:03.571 START TEST rpc_trace_cmd_test 00:04:03.571 ************************************ 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:03.571 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58735", 00:04:03.571 "tpoint_group_mask": "0x8", 00:04:03.571 "iscsi_conn": { 00:04:03.571 "mask": "0x2", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "scsi": { 00:04:03.571 "mask": "0x4", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "bdev": { 00:04:03.571 "mask": "0x8", 00:04:03.571 "tpoint_mask": "0xffffffffffffffff" 00:04:03.571 }, 00:04:03.571 "nvmf_rdma": { 00:04:03.571 "mask": "0x10", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "nvmf_tcp": { 00:04:03.571 "mask": "0x20", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "ftl": { 00:04:03.571 "mask": "0x40", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "blobfs": { 00:04:03.571 "mask": "0x80", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "dsa": { 00:04:03.571 "mask": "0x200", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "thread": { 00:04:03.571 "mask": "0x400", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "nvme_pcie": { 00:04:03.571 "mask": "0x800", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "iaa": { 00:04:03.571 "mask": "0x1000", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "nvme_tcp": { 00:04:03.571 "mask": "0x2000", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "bdev_nvme": { 00:04:03.571 "mask": "0x4000", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 }, 00:04:03.571 "sock": { 00:04:03.571 "mask": "0x8000", 00:04:03.571 "tpoint_mask": "0x0" 00:04:03.571 } 00:04:03.571 }' 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:03.571 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:03.830 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:03.830 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:03.830 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:03.830 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:03.830 16:07:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:03.830 00:04:03.830 real 0m0.282s 00:04:03.830 user 0m0.247s 00:04:03.830 sys 0m0.025s 00:04:03.830 16:07:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.830 
************************************ 00:04:03.830 16:07:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.830 END TEST rpc_trace_cmd_test 00:04:03.830 ************************************ 00:04:03.830 16:07:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.830 16:07:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:03.830 16:07:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:03.830 16:07:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:03.830 16:07:47 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.830 16:07:47 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.830 16:07:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.830 ************************************ 00:04:03.830 START TEST rpc_daemon_integrity 00:04:03.830 ************************************ 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.830 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:04.089 { 00:04:04.089 "name": "Malloc2", 00:04:04.089 "aliases": [ 00:04:04.089 "0413f807-4816-48c2-9f68-0e17d70c6a43" 00:04:04.089 ], 00:04:04.089 "product_name": "Malloc disk", 00:04:04.089 "block_size": 512, 00:04:04.089 "num_blocks": 16384, 00:04:04.089 "uuid": "0413f807-4816-48c2-9f68-0e17d70c6a43", 00:04:04.089 "assigned_rate_limits": { 00:04:04.089 "rw_ios_per_sec": 0, 00:04:04.089 "rw_mbytes_per_sec": 0, 00:04:04.089 "r_mbytes_per_sec": 0, 00:04:04.089 "w_mbytes_per_sec": 0 00:04:04.089 }, 00:04:04.089 "claimed": false, 00:04:04.089 "zoned": false, 00:04:04.089 "supported_io_types": { 00:04:04.089 "read": true, 00:04:04.089 "write": true, 00:04:04.089 "unmap": true, 00:04:04.089 "flush": true, 00:04:04.089 "reset": true, 00:04:04.089 "nvme_admin": false, 00:04:04.089 "nvme_io": false, 00:04:04.089 "nvme_io_md": false, 00:04:04.089 "write_zeroes": true, 00:04:04.089 "zcopy": true, 00:04:04.089 "get_zone_info": false, 00:04:04.089 "zone_management": false, 00:04:04.089 "zone_append": 
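[editor's note] The rpc_trace_cmd_test output above confirms that starting spdk_tgt with "-e bdev" leaves a trace shm file at /dev/shm/spdk_tgt_trace.pid58735 and fully enables the bdev tracepoint group (mask 0x8, tpoint_mask 0xffffffffffffffff) while every other group stays at 0x0. A sketch for inspecting the same state on a live target; the spdk_trace binary location is an assumption based on a default build:

  # Query tracepoint state over RPC and note where the shm file lives.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/scripts/rpc.py" trace_get_info | jq -r '.tpoint_shm_path, .bdev.tpoint_mask'
  # The shm file can then be decoded offline, as the startup notice suggests:
  #   "$SPDK/build/bin/spdk_trace" -s spdk_tgt -p <pid>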
false, 00:04:04.089 "compare": false, 00:04:04.089 "compare_and_write": false, 00:04:04.089 "abort": true, 00:04:04.089 "seek_hole": false, 00:04:04.089 "seek_data": false, 00:04:04.089 "copy": true, 00:04:04.089 "nvme_iov_md": false 00:04:04.089 }, 00:04:04.089 "memory_domains": [ 00:04:04.089 { 00:04:04.089 "dma_device_id": "system", 00:04:04.089 "dma_device_type": 1 00:04:04.089 }, 00:04:04.089 { 00:04:04.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.089 "dma_device_type": 2 00:04:04.089 } 00:04:04.089 ], 00:04:04.089 "driver_specific": {} 00:04:04.089 } 00:04:04.089 ]' 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.089 [2024-07-12 16:07:47.642011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:04.089 [2024-07-12 16:07:47.642063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:04.089 [2024-07-12 16:07:47.642097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x134a6b0 00:04:04.089 [2024-07-12 16:07:47.642106] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:04.089 [2024-07-12 16:07:47.643418] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.089 [2024-07-12 16:07:47.643463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.089 Passthru0 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.089 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:04.089 { 00:04:04.089 "name": "Malloc2", 00:04:04.089 "aliases": [ 00:04:04.089 "0413f807-4816-48c2-9f68-0e17d70c6a43" 00:04:04.089 ], 00:04:04.089 "product_name": "Malloc disk", 00:04:04.089 "block_size": 512, 00:04:04.089 "num_blocks": 16384, 00:04:04.089 "uuid": "0413f807-4816-48c2-9f68-0e17d70c6a43", 00:04:04.089 "assigned_rate_limits": { 00:04:04.089 "rw_ios_per_sec": 0, 00:04:04.089 "rw_mbytes_per_sec": 0, 00:04:04.089 "r_mbytes_per_sec": 0, 00:04:04.089 "w_mbytes_per_sec": 0 00:04:04.089 }, 00:04:04.089 "claimed": true, 00:04:04.089 "claim_type": "exclusive_write", 00:04:04.089 "zoned": false, 00:04:04.089 "supported_io_types": { 00:04:04.089 "read": true, 00:04:04.089 "write": true, 00:04:04.089 "unmap": true, 00:04:04.089 "flush": true, 00:04:04.089 "reset": true, 00:04:04.090 "nvme_admin": false, 00:04:04.090 "nvme_io": false, 00:04:04.090 "nvme_io_md": false, 00:04:04.090 "write_zeroes": true, 00:04:04.090 "zcopy": true, 00:04:04.090 "get_zone_info": false, 00:04:04.090 "zone_management": false, 00:04:04.090 "zone_append": false, 00:04:04.090 "compare": false, 00:04:04.090 "compare_and_write": false, 00:04:04.090 "abort": true, 00:04:04.090 
"seek_hole": false, 00:04:04.090 "seek_data": false, 00:04:04.090 "copy": true, 00:04:04.090 "nvme_iov_md": false 00:04:04.090 }, 00:04:04.090 "memory_domains": [ 00:04:04.090 { 00:04:04.090 "dma_device_id": "system", 00:04:04.090 "dma_device_type": 1 00:04:04.090 }, 00:04:04.090 { 00:04:04.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.090 "dma_device_type": 2 00:04:04.090 } 00:04:04.090 ], 00:04:04.090 "driver_specific": {} 00:04:04.090 }, 00:04:04.090 { 00:04:04.090 "name": "Passthru0", 00:04:04.090 "aliases": [ 00:04:04.090 "b8c41c63-5d31-568a-8ec5-ec25439208a9" 00:04:04.090 ], 00:04:04.090 "product_name": "passthru", 00:04:04.090 "block_size": 512, 00:04:04.090 "num_blocks": 16384, 00:04:04.090 "uuid": "b8c41c63-5d31-568a-8ec5-ec25439208a9", 00:04:04.090 "assigned_rate_limits": { 00:04:04.090 "rw_ios_per_sec": 0, 00:04:04.090 "rw_mbytes_per_sec": 0, 00:04:04.090 "r_mbytes_per_sec": 0, 00:04:04.090 "w_mbytes_per_sec": 0 00:04:04.090 }, 00:04:04.090 "claimed": false, 00:04:04.090 "zoned": false, 00:04:04.090 "supported_io_types": { 00:04:04.090 "read": true, 00:04:04.090 "write": true, 00:04:04.090 "unmap": true, 00:04:04.090 "flush": true, 00:04:04.090 "reset": true, 00:04:04.090 "nvme_admin": false, 00:04:04.090 "nvme_io": false, 00:04:04.090 "nvme_io_md": false, 00:04:04.090 "write_zeroes": true, 00:04:04.090 "zcopy": true, 00:04:04.090 "get_zone_info": false, 00:04:04.090 "zone_management": false, 00:04:04.090 "zone_append": false, 00:04:04.090 "compare": false, 00:04:04.090 "compare_and_write": false, 00:04:04.090 "abort": true, 00:04:04.090 "seek_hole": false, 00:04:04.090 "seek_data": false, 00:04:04.090 "copy": true, 00:04:04.090 "nvme_iov_md": false 00:04:04.090 }, 00:04:04.090 "memory_domains": [ 00:04:04.090 { 00:04:04.090 "dma_device_id": "system", 00:04:04.090 "dma_device_type": 1 00:04:04.090 }, 00:04:04.090 { 00:04:04.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.090 "dma_device_type": 2 00:04:04.090 } 00:04:04.090 ], 00:04:04.090 "driver_specific": { 00:04:04.090 "passthru": { 00:04:04.090 "name": "Passthru0", 00:04:04.090 "base_bdev_name": "Malloc2" 00:04:04.090 } 00:04:04.090 } 00:04:04.090 } 00:04:04.090 ]' 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:04.090 16:07:47 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.090 00:04:04.090 real 0m0.325s 00:04:04.090 user 0m0.231s 00:04:04.090 sys 0m0.030s 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.090 ************************************ 00:04:04.090 16:07:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.090 END TEST rpc_daemon_integrity 00:04:04.090 ************************************ 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:04.349 16:07:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:04.349 16:07:47 rpc -- rpc/rpc.sh@84 -- # killprocess 58735 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@948 -- # '[' -z 58735 ']' 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@952 -- # kill -0 58735 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@953 -- # uname 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58735 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.349 killing process with pid 58735 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58735' 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@967 -- # kill 58735 00:04:04.349 16:07:47 rpc -- common/autotest_common.sh@972 -- # wait 58735 00:04:04.608 00:04:04.608 real 0m2.669s 00:04:04.608 user 0m3.664s 00:04:04.608 sys 0m0.515s 00:04:04.608 16:07:48 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.608 ************************************ 00:04:04.608 END TEST rpc 00:04:04.608 ************************************ 00:04:04.608 16:07:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.608 16:07:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.608 16:07:48 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:04.608 16:07:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.608 16:07:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.608 16:07:48 -- common/autotest_common.sh@10 -- # set +x 00:04:04.608 ************************************ 00:04:04.608 START TEST skip_rpc 00:04:04.608 ************************************ 00:04:04.608 16:07:48 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:04.608 * Looking for test storage... 
00:04:04.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:04.608 16:07:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:04.608 16:07:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.608 16:07:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:04.608 16:07:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.608 16:07:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.608 16:07:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.608 ************************************ 00:04:04.608 START TEST skip_rpc 00:04:04.608 ************************************ 00:04:04.608 16:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:04.608 16:07:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58933 00:04:04.608 16:07:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:04.608 16:07:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.608 16:07:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:04.608 [2024-07-12 16:07:48.320012] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:04.608 [2024-07-12 16:07:48.320114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58933 ] 00:04:04.867 [2024-07-12 16:07:48.452755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.867 [2024-07-12 16:07:48.510993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.867 [2024-07-12 16:07:48.540188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58933 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58933 ']' 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58933 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58933 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:10.169 killing process with pid 58933 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58933' 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58933 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58933 00:04:10.169 00:04:10.169 real 0m5.279s 00:04:10.169 user 0m5.022s 00:04:10.169 sys 0m0.155s 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.169 16:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.169 ************************************ 00:04:10.169 END TEST skip_rpc 00:04:10.169 ************************************ 00:04:10.169 16:07:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:10.169 16:07:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:10.169 16:07:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.169 16:07:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.169 16:07:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.169 ************************************ 00:04:10.169 START TEST skip_rpc_with_json 00:04:10.169 ************************************ 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59014 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59014 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59014 ']' 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
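[editor's note] The skip_rpc case above starts spdk_tgt with --no-rpc-server, so no RPC socket is ever created and the wrapped rpc_cmd spdk_get_version is required to fail (the NOT helper asserts a non-zero exit). A rough stand-alone equivalent, assuming the default socket path and the same binaries:

  # With --no-rpc-server, any RPC client call must fail.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  TGT=$!
  sleep 5
  if "$SPDK/scripts/rpc.py" spdk_get_version; then
      echo "unexpected: RPC server answered" >&2
  fi
  # rpc.py errors out because /var/tmp/spdk.sock was never created.
  kill "$TGT" && wait "$TGT"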
00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.169 16:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.169 [2024-07-12 16:07:53.652137] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:10.169 [2024-07-12 16:07:53.652240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59014 ] 00:04:10.169 [2024-07-12 16:07:53.785901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.169 [2024-07-12 16:07:53.843894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.169 [2024-07-12 16:07:53.874631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.106 [2024-07-12 16:07:54.644827] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:11.106 request: 00:04:11.106 { 00:04:11.106 "trtype": "tcp", 00:04:11.106 "method": "nvmf_get_transports", 00:04:11.106 "req_id": 1 00:04:11.106 } 00:04:11.106 Got JSON-RPC error response 00:04:11.106 response: 00:04:11.106 { 00:04:11.106 "code": -19, 00:04:11.106 "message": "No such device" 00:04:11.106 } 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.106 [2024-07-12 16:07:54.656945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.106 16:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.106 { 00:04:11.106 "subsystems": [ 00:04:11.106 { 00:04:11.106 "subsystem": "keyring", 00:04:11.107 "config": [] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "iobuf", 00:04:11.107 "config": [ 00:04:11.107 { 00:04:11.107 "method": "iobuf_set_options", 00:04:11.107 "params": { 00:04:11.107 "small_pool_count": 8192, 00:04:11.107 "large_pool_count": 1024, 00:04:11.107 "small_bufsize": 8192, 00:04:11.107 "large_bufsize": 135168 00:04:11.107 } 00:04:11.107 } 00:04:11.107 
] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "sock", 00:04:11.107 "config": [ 00:04:11.107 { 00:04:11.107 "method": "sock_set_default_impl", 00:04:11.107 "params": { 00:04:11.107 "impl_name": "uring" 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "sock_impl_set_options", 00:04:11.107 "params": { 00:04:11.107 "impl_name": "ssl", 00:04:11.107 "recv_buf_size": 4096, 00:04:11.107 "send_buf_size": 4096, 00:04:11.107 "enable_recv_pipe": true, 00:04:11.107 "enable_quickack": false, 00:04:11.107 "enable_placement_id": 0, 00:04:11.107 "enable_zerocopy_send_server": true, 00:04:11.107 "enable_zerocopy_send_client": false, 00:04:11.107 "zerocopy_threshold": 0, 00:04:11.107 "tls_version": 0, 00:04:11.107 "enable_ktls": false 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "sock_impl_set_options", 00:04:11.107 "params": { 00:04:11.107 "impl_name": "posix", 00:04:11.107 "recv_buf_size": 2097152, 00:04:11.107 "send_buf_size": 2097152, 00:04:11.107 "enable_recv_pipe": true, 00:04:11.107 "enable_quickack": false, 00:04:11.107 "enable_placement_id": 0, 00:04:11.107 "enable_zerocopy_send_server": true, 00:04:11.107 "enable_zerocopy_send_client": false, 00:04:11.107 "zerocopy_threshold": 0, 00:04:11.107 "tls_version": 0, 00:04:11.107 "enable_ktls": false 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "sock_impl_set_options", 00:04:11.107 "params": { 00:04:11.107 "impl_name": "uring", 00:04:11.107 "recv_buf_size": 2097152, 00:04:11.107 "send_buf_size": 2097152, 00:04:11.107 "enable_recv_pipe": true, 00:04:11.107 "enable_quickack": false, 00:04:11.107 "enable_placement_id": 0, 00:04:11.107 "enable_zerocopy_send_server": false, 00:04:11.107 "enable_zerocopy_send_client": false, 00:04:11.107 "zerocopy_threshold": 0, 00:04:11.107 "tls_version": 0, 00:04:11.107 "enable_ktls": false 00:04:11.107 } 00:04:11.107 } 00:04:11.107 ] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "vmd", 00:04:11.107 "config": [] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "accel", 00:04:11.107 "config": [ 00:04:11.107 { 00:04:11.107 "method": "accel_set_options", 00:04:11.107 "params": { 00:04:11.107 "small_cache_size": 128, 00:04:11.107 "large_cache_size": 16, 00:04:11.107 "task_count": 2048, 00:04:11.107 "sequence_count": 2048, 00:04:11.107 "buf_count": 2048 00:04:11.107 } 00:04:11.107 } 00:04:11.107 ] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "bdev", 00:04:11.107 "config": [ 00:04:11.107 { 00:04:11.107 "method": "bdev_set_options", 00:04:11.107 "params": { 00:04:11.107 "bdev_io_pool_size": 65535, 00:04:11.107 "bdev_io_cache_size": 256, 00:04:11.107 "bdev_auto_examine": true, 00:04:11.107 "iobuf_small_cache_size": 128, 00:04:11.107 "iobuf_large_cache_size": 16 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "bdev_raid_set_options", 00:04:11.107 "params": { 00:04:11.107 "process_window_size_kb": 1024 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "bdev_iscsi_set_options", 00:04:11.107 "params": { 00:04:11.107 "timeout_sec": 30 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "bdev_nvme_set_options", 00:04:11.107 "params": { 00:04:11.107 "action_on_timeout": "none", 00:04:11.107 "timeout_us": 0, 00:04:11.107 "timeout_admin_us": 0, 00:04:11.107 "keep_alive_timeout_ms": 10000, 00:04:11.107 "arbitration_burst": 0, 00:04:11.107 "low_priority_weight": 0, 00:04:11.107 "medium_priority_weight": 0, 00:04:11.107 "high_priority_weight": 0, 00:04:11.107 
"nvme_adminq_poll_period_us": 10000, 00:04:11.107 "nvme_ioq_poll_period_us": 0, 00:04:11.107 "io_queue_requests": 0, 00:04:11.107 "delay_cmd_submit": true, 00:04:11.107 "transport_retry_count": 4, 00:04:11.107 "bdev_retry_count": 3, 00:04:11.107 "transport_ack_timeout": 0, 00:04:11.107 "ctrlr_loss_timeout_sec": 0, 00:04:11.107 "reconnect_delay_sec": 0, 00:04:11.107 "fast_io_fail_timeout_sec": 0, 00:04:11.107 "disable_auto_failback": false, 00:04:11.107 "generate_uuids": false, 00:04:11.107 "transport_tos": 0, 00:04:11.107 "nvme_error_stat": false, 00:04:11.107 "rdma_srq_size": 0, 00:04:11.107 "io_path_stat": false, 00:04:11.107 "allow_accel_sequence": false, 00:04:11.107 "rdma_max_cq_size": 0, 00:04:11.107 "rdma_cm_event_timeout_ms": 0, 00:04:11.107 "dhchap_digests": [ 00:04:11.107 "sha256", 00:04:11.107 "sha384", 00:04:11.107 "sha512" 00:04:11.107 ], 00:04:11.107 "dhchap_dhgroups": [ 00:04:11.107 "null", 00:04:11.107 "ffdhe2048", 00:04:11.107 "ffdhe3072", 00:04:11.107 "ffdhe4096", 00:04:11.107 "ffdhe6144", 00:04:11.107 "ffdhe8192" 00:04:11.107 ] 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "bdev_nvme_set_hotplug", 00:04:11.107 "params": { 00:04:11.107 "period_us": 100000, 00:04:11.107 "enable": false 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "bdev_wait_for_examine" 00:04:11.107 } 00:04:11.107 ] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "scsi", 00:04:11.107 "config": null 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "scheduler", 00:04:11.107 "config": [ 00:04:11.107 { 00:04:11.107 "method": "framework_set_scheduler", 00:04:11.107 "params": { 00:04:11.107 "name": "static" 00:04:11.107 } 00:04:11.107 } 00:04:11.107 ] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "vhost_scsi", 00:04:11.107 "config": [] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "vhost_blk", 00:04:11.107 "config": [] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "ublk", 00:04:11.107 "config": [] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "nbd", 00:04:11.107 "config": [] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": "nvmf", 00:04:11.107 "config": [ 00:04:11.107 { 00:04:11.107 "method": "nvmf_set_config", 00:04:11.107 "params": { 00:04:11.107 "discovery_filter": "match_any", 00:04:11.107 "admin_cmd_passthru": { 00:04:11.107 "identify_ctrlr": false 00:04:11.107 } 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "nvmf_set_max_subsystems", 00:04:11.107 "params": { 00:04:11.107 "max_subsystems": 1024 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "nvmf_set_crdt", 00:04:11.107 "params": { 00:04:11.107 "crdt1": 0, 00:04:11.107 "crdt2": 0, 00:04:11.107 "crdt3": 0 00:04:11.107 } 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "method": "nvmf_create_transport", 00:04:11.107 "params": { 00:04:11.107 "trtype": "TCP", 00:04:11.107 "max_queue_depth": 128, 00:04:11.107 "max_io_qpairs_per_ctrlr": 127, 00:04:11.107 "in_capsule_data_size": 4096, 00:04:11.107 "max_io_size": 131072, 00:04:11.107 "io_unit_size": 131072, 00:04:11.107 "max_aq_depth": 128, 00:04:11.107 "num_shared_buffers": 511, 00:04:11.107 "buf_cache_size": 4294967295, 00:04:11.107 "dif_insert_or_strip": false, 00:04:11.107 "zcopy": false, 00:04:11.107 "c2h_success": true, 00:04:11.107 "sock_priority": 0, 00:04:11.107 "abort_timeout_sec": 1, 00:04:11.107 "ack_timeout": 0, 00:04:11.107 "data_wr_pool_size": 0 00:04:11.107 } 00:04:11.107 } 00:04:11.107 ] 00:04:11.107 }, 00:04:11.107 { 00:04:11.107 "subsystem": 
"iscsi", 00:04:11.107 "config": [ 00:04:11.107 { 00:04:11.107 "method": "iscsi_set_options", 00:04:11.107 "params": { 00:04:11.107 "node_base": "iqn.2016-06.io.spdk", 00:04:11.107 "max_sessions": 128, 00:04:11.107 "max_connections_per_session": 2, 00:04:11.107 "max_queue_depth": 64, 00:04:11.107 "default_time2wait": 2, 00:04:11.107 "default_time2retain": 20, 00:04:11.107 "first_burst_length": 8192, 00:04:11.107 "immediate_data": true, 00:04:11.107 "allow_duplicated_isid": false, 00:04:11.107 "error_recovery_level": 0, 00:04:11.107 "nop_timeout": 60, 00:04:11.107 "nop_in_interval": 30, 00:04:11.107 "disable_chap": false, 00:04:11.107 "require_chap": false, 00:04:11.107 "mutual_chap": false, 00:04:11.107 "chap_group": 0, 00:04:11.107 "max_large_datain_per_connection": 64, 00:04:11.107 "max_r2t_per_connection": 4, 00:04:11.107 "pdu_pool_size": 36864, 00:04:11.107 "immediate_data_pool_size": 16384, 00:04:11.107 "data_out_pool_size": 2048 00:04:11.107 } 00:04:11.107 } 00:04:11.107 ] 00:04:11.108 } 00:04:11.108 ] 00:04:11.108 } 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59014 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59014 ']' 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59014 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59014 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:11.367 killing process with pid 59014 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59014' 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59014 00:04:11.367 16:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59014 00:04:11.626 16:07:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59036 00:04:11.626 16:07:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.626 16:07:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59036 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59036 ']' 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59036 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59036 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59036' 00:04:16.900 killing process with pid 59036 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59036 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59036 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:16.900 00:04:16.900 real 0m6.806s 00:04:16.900 user 0m6.744s 00:04:16.900 sys 0m0.445s 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.900 ************************************ 00:04:16.900 END TEST skip_rpc_with_json 00:04:16.900 ************************************ 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.900 16:08:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:16.900 16:08:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:16.900 16:08:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.900 16:08:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.900 16:08:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.900 ************************************ 00:04:16.900 START TEST skip_rpc_with_delay 00:04:16.900 ************************************ 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.900 
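The skip_rpc_with_json run that just finished boots spdk_tgt with --no-rpc-server --json test/rpc/config.json; the long JSON dump printed above is that saved configuration, a list of per-subsystem blocks where each entry names an RPC method plus the params needed to recreate that piece of state. As a rough, hand-written illustration (not the harness's actual test/rpc/config.json), a stripped-down file that reuses only method/param pairs visible in the dump might look like the following; /tmp/minimal_config.json is a hypothetical path and the top-level "subsystems" key follows the usual save_config layout:

{
  "subsystems": [
    {
      "subsystem": "scheduler",
      "config": [
        { "method": "framework_set_scheduler", "params": { "name": "static" } }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_set_max_subsystems", "params": { "max_subsystems": 1024 } }
      ]
    }
  ]
}

Booting the target straight from such a file, with the RPC server disabled as in the trace, would then be:

# Sketch only: start spdk_tgt from a hand-written JSON config (repo root as cwd)
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/minimal_config.json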
[2024-07-12 16:08:00.519631] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:16.900 [2024-07-12 16:08:00.519766] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:16.900 ************************************ 00:04:16.900 END TEST skip_rpc_with_delay 00:04:16.900 ************************************ 00:04:16.900 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:16.900 00:04:16.900 real 0m0.089s 00:04:16.900 user 0m0.053s 00:04:16.901 sys 0m0.035s 00:04:16.901 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.901 16:08:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:16.901 16:08:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:16.901 16:08:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:16.901 16:08:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:16.901 16:08:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:16.901 16:08:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.901 16:08:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.901 16:08:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.901 ************************************ 00:04:16.901 START TEST exit_on_failed_rpc_init 00:04:16.901 ************************************ 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59151 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59151 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59151 ']' 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:16.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:16.901 16:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.160 [2024-07-12 16:08:00.664945] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:04:17.160 [2024-07-12 16:08:00.665660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59151 ] 00:04:17.160 [2024-07-12 16:08:00.805141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.160 [2024-07-12 16:08:00.876574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.419 [2024-07-12 16:08:00.911824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:17.985 16:08:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.243 [2024-07-12 16:08:01.732226] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:18.243 [2024-07-12 16:08:01.732538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59169 ] 00:04:18.243 [2024-07-12 16:08:01.872430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.243 [2024-07-12 16:08:01.939560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.243 [2024-07-12 16:08:01.939659] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
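These errors are the point of exit_on_failed_rpc_init: the first spdk_tgt (pid 59151) already owns the default RPC socket /var/tmp/spdk.sock, so the second instance started on core mask 0x2 without its own socket cannot register an RPC listener. Outside this negative test, the usual way to run two targets side by side is to give the second one a private socket with -r and address each instance with rpc.py -s; a minimal sketch, where /var/tmp/spdk2.sock is a hypothetical path and rpc_get_methods is only a generic query used to show socket selection:

# First target keeps the default RPC socket (/var/tmp/spdk.sock)
./build/bin/spdk_tgt -m 0x1 &
# Second target gets a private socket instead of colliding with the first
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
# Talk to each instance by naming its socket explicitly
./scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods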
00:04:18.243 [2024-07-12 16:08:01.939691] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:18.243 [2024-07-12 16:08:01.939702] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59151 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59151 ']' 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59151 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59151 00:04:18.501 killing process with pid 59151 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59151' 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59151 00:04:18.501 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59151 00:04:18.759 ************************************ 00:04:18.759 END TEST exit_on_failed_rpc_init 00:04:18.759 ************************************ 00:04:18.759 00:04:18.759 real 0m1.678s 00:04:18.759 user 0m2.085s 00:04:18.759 sys 0m0.312s 00:04:18.759 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.759 16:08:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.759 16:08:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.759 16:08:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.759 00:04:18.759 real 0m14.147s 00:04:18.759 user 0m14.015s 00:04:18.759 sys 0m1.114s 00:04:18.759 16:08:02 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.759 16:08:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.759 ************************************ 00:04:18.759 END TEST skip_rpc 00:04:18.759 ************************************ 00:04:18.759 16:08:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:18.759 16:08:02 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:18.759 16:08:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.759 
16:08:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.759 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:04:18.759 ************************************ 00:04:18.759 START TEST rpc_client 00:04:18.759 ************************************ 00:04:18.759 16:08:02 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:18.759 * Looking for test storage... 00:04:18.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:18.759 16:08:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:18.759 OK 00:04:18.759 16:08:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:18.759 00:04:18.759 real 0m0.097s 00:04:18.759 user 0m0.042s 00:04:18.759 sys 0m0.062s 00:04:18.759 16:08:02 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.759 ************************************ 00:04:18.759 END TEST rpc_client 00:04:18.759 16:08:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:18.759 ************************************ 00:04:19.017 16:08:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:19.017 16:08:02 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:19.017 16:08:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.017 16:08:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.017 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:04:19.017 ************************************ 00:04:19.017 START TEST json_config 00:04:19.017 ************************************ 00:04:19.017 16:08:02 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:19.017 16:08:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:19.017 16:08:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:19.017 16:08:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.018 16:08:02 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:19.018 16:08:02 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.018 16:08:02 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.018 16:08:02 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.018 16:08:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.018 16:08:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.018 16:08:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.018 16:08:02 json_config -- paths/export.sh@5 -- # export PATH 00:04:19.018 16:08:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@47 -- # : 0 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:19.018 16:08:02 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.018 INFO: JSON configuration test init 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.018 16:08:02 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:19.018 16:08:02 json_config -- json_config/common.sh@9 -- # local app=target 00:04:19.018 16:08:02 json_config -- json_config/common.sh@10 -- # shift 00:04:19.018 16:08:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.018 16:08:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.018 16:08:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.018 16:08:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.018 16:08:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.018 16:08:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59287 00:04:19.018 Waiting for target to run... 00:04:19.018 16:08:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
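Here json_config_test_start_app launches the target with --wait-for-rpc on a private socket (the spdk_tgt command line appears in the trace just below), which makes spdk_tgt pause before subsystem initialization until it is told to continue over RPC. A rough manual equivalent, assuming a built SPDK tree as the working directory; rpc_get_methods and framework_start_init are standard SPDK RPCs that do not appear in this excerpt:

# Start the target paused, with the same flags as the harness
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# Wait until the RPC socket answers, then let initialization proceed
until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init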
00:04:19.018 16:08:02 json_config -- json_config/common.sh@25 -- # waitforlisten 59287 /var/tmp/spdk_tgt.sock 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 59287 ']' 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.018 16:08:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.018 16:08:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.018 [2024-07-12 16:08:02.670196] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:19.018 [2024-07-12 16:08:02.670299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:04:19.276 [2024-07-12 16:08:02.975591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.534 [2024-07-12 16:08:03.030879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.099 16:08:03 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.099 16:08:03 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:20.100 00:04:20.100 16:08:03 json_config -- json_config/common.sh@26 -- # echo '' 00:04:20.100 16:08:03 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:20.100 16:08:03 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:20.100 16:08:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.100 16:08:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.100 16:08:03 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:20.100 16:08:03 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:20.100 16:08:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:20.100 16:08:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.100 16:08:03 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:20.100 16:08:03 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:20.100 16:08:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:20.358 [2024-07-12 16:08:03.947305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:20.626 16:08:04 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:20.626 16:08:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:20.626 16:08:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.626 16:08:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.626 16:08:04 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:20.626 16:08:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:20.626 16:08:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:20.626 16:08:04 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:20.626 16:08:04 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:20.626 16:08:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:20.931 16:08:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:20.931 16:08:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:20.931 16:08:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.931 16:08:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:20.931 16:08:04 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:20.931 16:08:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.190 MallocForNvmf0 00:04:21.190 16:08:04 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:21.190 16:08:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:21.448 MallocForNvmf1 00:04:21.448 16:08:04 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:21.448 16:08:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:21.706 [2024-07-12 16:08:05.208372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.706 16:08:05 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:21.706 16:08:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:21.963 16:08:05 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:21.963 16:08:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.222 16:08:05 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.222 16:08:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.482 16:08:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:22.482 16:08:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:22.482 [2024-07-12 16:08:06.180804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:22.482 16:08:06 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:22.482 16:08:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.482 16:08:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.740 16:08:06 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:22.740 16:08:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.740 16:08:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.740 16:08:06 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:22.740 16:08:06 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:22.740 16:08:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:22.999 MallocBdevForConfigChangeCheck 00:04:22.999 16:08:06 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:22.999 16:08:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.999 16:08:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.999 16:08:06 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:22.999 16:08:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.258 INFO: shutting down applications... 00:04:23.258 16:08:06 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
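The sequence traced above builds a small NVMe-oF/TCP target entirely over RPC: two malloc bdevs, a TCP transport, one subsystem holding both namespaces, and a loopback listener on port 4420. The same steps can be replayed by hand against the target's socket; the commands below are taken from the trace, with the absolute paths shortened to the repo root:

# Shorthand for the RPC client; $RPC is expanded unquoted on purpose (word splitting)
RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
# Two malloc bdevs to serve as namespaces
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, then a subsystem exposing both bdevs on 127.0.0.1:4420
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420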
00:04:23.258 16:08:06 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:23.258 16:08:06 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:23.258 16:08:06 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:23.258 16:08:06 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:23.826 Calling clear_iscsi_subsystem 00:04:23.826 Calling clear_nvmf_subsystem 00:04:23.826 Calling clear_nbd_subsystem 00:04:23.826 Calling clear_ublk_subsystem 00:04:23.826 Calling clear_vhost_blk_subsystem 00:04:23.826 Calling clear_vhost_scsi_subsystem 00:04:23.826 Calling clear_bdev_subsystem 00:04:23.826 16:08:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:23.826 16:08:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:23.826 16:08:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:23.826 16:08:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.826 16:08:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:23.826 16:08:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:24.085 16:08:07 json_config -- json_config/json_config.sh@345 -- # break 00:04:24.085 16:08:07 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:24.085 16:08:07 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:24.085 16:08:07 json_config -- json_config/common.sh@31 -- # local app=target 00:04:24.085 16:08:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.085 16:08:07 json_config -- json_config/common.sh@35 -- # [[ -n 59287 ]] 00:04:24.085 16:08:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59287 00:04:24.085 16:08:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.085 16:08:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.085 16:08:07 json_config -- json_config/common.sh@41 -- # kill -0 59287 00:04:24.085 16:08:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.654 16:08:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.654 16:08:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.654 16:08:08 json_config -- json_config/common.sh@41 -- # kill -0 59287 00:04:24.654 16:08:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.654 16:08:08 json_config -- json_config/common.sh@43 -- # break 00:04:24.654 16:08:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.654 SPDK target shutdown done 00:04:24.654 16:08:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.654 INFO: relaunching applications... 00:04:24.654 16:08:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
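After tearing the first target down, the test relaunches it from the snapshot it saved a moment earlier. The round trip reduces to two steps; a minimal sketch assuming the repo root as the working directory, with the same flags and file name as in the trace:

# Snapshot the running target's configuration to JSON
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
# ...stop the old instance, then boot a fresh one straight from the snapshot
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json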
00:04:24.654 16:08:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:24.654 16:08:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.654 16:08:08 json_config -- json_config/common.sh@10 -- # shift 00:04:24.654 16:08:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.654 16:08:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.654 16:08:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.654 16:08:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.654 16:08:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.654 16:08:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59472 00:04:24.654 16:08:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:24.654 Waiting for target to run... 00:04:24.654 16:08:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.654 16:08:08 json_config -- json_config/common.sh@25 -- # waitforlisten 59472 /var/tmp/spdk_tgt.sock 00:04:24.654 16:08:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 59472 ']' 00:04:24.654 16:08:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.654 16:08:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.655 16:08:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.655 16:08:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.655 16:08:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.655 [2024-07-12 16:08:08.264005] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:24.655 [2024-07-12 16:08:08.264096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59472 ] 00:04:24.913 [2024-07-12 16:08:08.537443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.913 [2024-07-12 16:08:08.575715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.172 [2024-07-12 16:08:08.701613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:25.172 [2024-07-12 16:08:08.886945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.432 [2024-07-12 16:08:08.919053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:25.690 16:08:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.690 16:08:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:25.690 00:04:25.690 16:08:09 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.690 16:08:09 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:25.690 INFO: Checking if target configuration is the same... 
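The "Checking if target configuration is the same" step that follows is a normalized text diff: the live save_config output and the on-disk snapshot are both passed through config_filter.py -method sort, so key ordering cannot produce false differences, and the results are compared with diff -u. A hand-rolled version of the same check, with /tmp/live.json and /tmp/saved.json as hypothetical scratch files (config_filter.py is assumed to read stdin, as its argument-less invocations in the trace suggest):

# Normalize both sides, then compare; exit status 0 means the configs match
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/live.json
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'

The later "configuration change detected" step is the same diff run again after mutating the live target, for example with ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck, at which point diff is expected to return non-zero.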
00:04:25.690 16:08:09 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:25.690 16:08:09 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:25.690 16:08:09 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:25.690 16:08:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.690 + '[' 2 -ne 2 ']' 00:04:25.690 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:25.690 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:25.690 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:25.690 +++ basename /dev/fd/62 00:04:25.690 ++ mktemp /tmp/62.XXX 00:04:25.690 + tmp_file_1=/tmp/62.VoB 00:04:25.690 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:25.690 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:25.690 + tmp_file_2=/tmp/spdk_tgt_config.json.9Fa 00:04:25.690 + ret=0 00:04:25.690 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:25.949 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:25.949 + diff -u /tmp/62.VoB /tmp/spdk_tgt_config.json.9Fa 00:04:25.949 INFO: JSON config files are the same 00:04:25.949 + echo 'INFO: JSON config files are the same' 00:04:25.949 + rm /tmp/62.VoB /tmp/spdk_tgt_config.json.9Fa 00:04:25.949 + exit 0 00:04:25.949 16:08:09 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:25.949 INFO: changing configuration and checking if this can be detected... 00:04:25.949 16:08:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:25.949 16:08:09 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:25.949 16:08:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.515 16:08:09 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:26.515 16:08:09 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:26.515 16:08:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.515 + '[' 2 -ne 2 ']' 00:04:26.515 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:26.515 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:26.515 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:26.515 +++ basename /dev/fd/62 00:04:26.515 ++ mktemp /tmp/62.XXX 00:04:26.515 + tmp_file_1=/tmp/62.bhi 00:04:26.515 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:26.515 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.515 + tmp_file_2=/tmp/spdk_tgt_config.json.yYJ 00:04:26.515 + ret=0 00:04:26.515 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:26.773 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:26.773 + diff -u /tmp/62.bhi /tmp/spdk_tgt_config.json.yYJ 00:04:26.773 + ret=1 00:04:26.773 + echo '=== Start of file: /tmp/62.bhi ===' 00:04:26.773 + cat /tmp/62.bhi 00:04:26.773 + echo '=== End of file: /tmp/62.bhi ===' 00:04:26.773 + echo '' 00:04:26.773 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yYJ ===' 00:04:26.773 + cat /tmp/spdk_tgt_config.json.yYJ 00:04:26.773 + echo '=== End of file: /tmp/spdk_tgt_config.json.yYJ ===' 00:04:26.773 + echo '' 00:04:26.773 + rm /tmp/62.bhi /tmp/spdk_tgt_config.json.yYJ 00:04:26.773 + exit 1 00:04:26.773 INFO: configuration change detected. 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@317 -- # [[ -n 59472 ]] 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.773 16:08:10 json_config -- json_config/json_config.sh@323 -- # killprocess 59472 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@948 -- # '[' -z 59472 ']' 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@952 -- # kill -0 59472 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@953 -- # uname 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.773 16:08:10 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59472 00:04:27.031 
16:08:10 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.031 16:08:10 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.031 killing process with pid 59472 00:04:27.031 16:08:10 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59472' 00:04:27.031 16:08:10 json_config -- common/autotest_common.sh@967 -- # kill 59472 00:04:27.031 16:08:10 json_config -- common/autotest_common.sh@972 -- # wait 59472 00:04:27.031 16:08:10 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:27.031 16:08:10 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:27.031 16:08:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.031 16:08:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.031 16:08:10 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:27.031 INFO: Success 00:04:27.031 16:08:10 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:27.031 00:04:27.031 real 0m8.214s 00:04:27.031 user 0m12.068s 00:04:27.031 sys 0m1.357s 00:04:27.031 16:08:10 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.031 16:08:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.031 ************************************ 00:04:27.032 END TEST json_config 00:04:27.032 ************************************ 00:04:27.293 16:08:10 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.293 16:08:10 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:27.293 16:08:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.293 16:08:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.293 16:08:10 -- common/autotest_common.sh@10 -- # set +x 00:04:27.293 ************************************ 00:04:27.293 START TEST json_config_extra_key 00:04:27.293 ************************************ 00:04:27.293 16:08:10 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.293 16:08:10 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.293 16:08:10 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.293 16:08:10 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.293 16:08:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.293 16:08:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.293 16:08:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.293 16:08:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:27.293 16:08:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.293 16:08:10 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:27.293 16:08:10 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.293 INFO: launching applications... 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:27.293 16:08:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:27.293 16:08:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:27.293 16:08:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:27.293 16:08:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.293 16:08:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.293 16:08:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.293 16:08:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.293 16:08:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.294 16:08:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59618 00:04:27.294 Waiting for target to run... 00:04:27.294 16:08:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
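json_config_extra_key repeats the boot-from-JSON pattern, but with a checked-in config (test/json_config/extra_key.json) rather than a freshly saved snapshot; the launch command appears in the trace just below. Restated outside the harness, with the repo root as the working directory:

# Boot the target from extra_key.json, using the same flags as the harness
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json ./test/json_config/extra_key.json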
00:04:27.294 16:08:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59618 /var/tmp/spdk_tgt.sock 00:04:27.294 16:08:10 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:27.294 16:08:10 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59618 ']' 00:04:27.294 16:08:10 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.294 16:08:10 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.294 16:08:10 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.294 16:08:10 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.294 16:08:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.294 [2024-07-12 16:08:10.920275] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:27.294 [2024-07-12 16:08:10.920387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59618 ] 00:04:27.552 [2024-07-12 16:08:11.244660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.811 [2024-07-12 16:08:11.302938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.811 [2024-07-12 16:08:11.325275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:28.379 00:04:28.379 INFO: shutting down applications... 00:04:28.379 16:08:11 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.379 16:08:11 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:28.379 16:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
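The waitforlisten 59618 /var/tmp/spdk_tgt.sock call traced above (its body runs from common/autotest_common.sh, with max_retries=100) blocks until the freshly started target is accepting RPCs on its UNIX-domain socket. The loop below is an illustrative stand-in for that wait, not SPDK's actual helper; the function name wait_for_rpc_socket and the simple socket-existence check are assumptions made for the sketch.

  # Hypothetical polling loop; the real waitforlisten does more.
  wait_for_rpc_socket() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} max_retries=${3:-100} i
      for ((i = 0; i < max_retries; i++)); do
          # Give up early if the target died during startup.
          kill -0 "$pid" 2>/dev/null || return 1
          # Succeed once the UNIX-domain RPC socket shows up.
          [[ -S $sock ]] && return 0
          sleep 0.5
      done
      return 1
  }

  wait_for_rpc_socket 59618 /var/tmp/spdk_tgt.sock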
00:04:28.379 16:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59618 ]] 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59618 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59618 00:04:28.379 16:08:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:28.945 16:08:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.945 16:08:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.945 16:08:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59618 00:04:28.945 16:08:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:28.945 16:08:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:28.945 16:08:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:28.945 16:08:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:28.945 SPDK target shutdown done 00:04:28.945 16:08:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:28.945 Success 00:04:28.945 00:04:28.945 real 0m1.622s 00:04:28.945 user 0m1.453s 00:04:28.945 sys 0m0.320s 00:04:28.945 16:08:12 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.946 ************************************ 00:04:28.946 16:08:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.946 END TEST json_config_extra_key 00:04:28.946 ************************************ 00:04:28.946 16:08:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:28.946 16:08:12 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:28.946 16:08:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.946 16:08:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.946 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:04:28.946 ************************************ 00:04:28.946 START TEST alias_rpc 00:04:28.946 ************************************ 00:04:28.946 16:08:12 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:28.946 * Looking for test storage... 00:04:28.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:28.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
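The shutdown sequence traced above (json_config/common.sh) sends SIGINT to the target and then polls kill -0 for up to 30 half-second intervals before printing 'SPDK target shutdown done'. The same pattern as a standalone sketch rather than the verbatim script, with the PID taken from the trace:

  pid=59618
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      # kill -0 sends no signal; it only tests whether the process still exists.
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done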
00:04:28.946 16:08:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:28.946 16:08:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59677 00:04:28.946 16:08:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59677 00:04:28.946 16:08:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.946 16:08:12 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59677 ']' 00:04:28.946 16:08:12 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.946 16:08:12 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.946 16:08:12 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.946 16:08:12 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.946 16:08:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.946 [2024-07-12 16:08:12.599419] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:28.946 [2024-07-12 16:08:12.599514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59677 ] 00:04:29.204 [2024-07-12 16:08:12.725670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.204 [2024-07-12 16:08:12.773681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.204 [2024-07-12 16:08:12.799913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:29.204 16:08:12 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.204 16:08:12 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:29.204 16:08:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:29.770 16:08:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59677 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59677 ']' 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59677 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59677 00:04:29.770 killing process with pid 59677 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59677' 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@967 -- # kill 59677 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@972 -- # wait 59677 00:04:29.770 ************************************ 00:04:29.770 END TEST alias_rpc 00:04:29.770 ************************************ 00:04:29.770 00:04:29.770 real 0m1.034s 00:04:29.770 user 0m1.224s 00:04:29.770 sys 0m0.272s 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.770 16:08:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.029 16:08:13 -- common/autotest_common.sh@1142 -- # return 0 
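alias_rpc, like the other suites in this run, arms trap 'killprocess $spdk_tgt_pid; exit 1' ERR right after starting the target, and the killprocess helper traced above (common/autotest_common.sh) checks the process name via ps before signalling and reaping it. A condensed sketch of that cleanup idiom; the real helper does more, for example the sudo special case visible in the trace.

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      # The traced helper inspects the command name first (it special-cases
      # sudo); here it is only echoed alongside the PID.
      local name
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      # wait reaps the target and surfaces its exit status; this works because
      # spdk_tgt was started as a child of the test shell.
      wait "$pid" || true
  }

  spdk_tgt_pid=59677                              # PID from the trace above
  trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR  # same trap alias_rpc.sh sets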
00:04:30.029 16:08:13 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:30.029 16:08:13 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:30.029 16:08:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.029 16:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.029 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:30.029 ************************************ 00:04:30.029 START TEST spdkcli_tcp 00:04:30.029 ************************************ 00:04:30.029 16:08:13 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:30.029 * Looking for test storage... 00:04:30.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:30.029 16:08:13 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.029 16:08:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59740 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:30.029 16:08:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59740 00:04:30.029 16:08:13 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59740 ']' 00:04:30.029 16:08:13 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.029 16:08:13 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.029 16:08:13 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.029 16:08:13 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.030 16:08:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.030 [2024-07-12 16:08:13.698907] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
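tcp.sh has just pinned IP_ADDRESS=127.0.0.1 and PORT=9998 in the trace above; since spdk_tgt only listens on a UNIX-domain RPC socket, the suite bridges that socket to the TCP endpoint with socat and drives rpc.py over it (the exact commands appear a few trace lines below). A minimal reproduction of that bridge, with the socat and rpc.py invocations copied from the trace, including its -r/-t retry and timeout settings; the socat_pid handling is illustrative.

  # Expose the target's UNIX-domain RPC socket on TCP port 9998.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Talk JSON-RPC over TCP, e.g. list every registered method.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  # Tear the bridge down (socat may already have exited once the connection closed).
  kill "$socat_pid" 2>/dev/null || true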
00:04:30.030 [2024-07-12 16:08:13.699028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59740 ] 00:04:30.289 [2024-07-12 16:08:13.838948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.289 [2024-07-12 16:08:13.902165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.289 [2024-07-12 16:08:13.902176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.289 [2024-07-12 16:08:13.931944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:31.225 16:08:14 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.225 16:08:14 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:31.225 16:08:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59759 00:04:31.225 16:08:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:31.225 16:08:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:31.225 [ 00:04:31.225 "bdev_malloc_delete", 00:04:31.225 "bdev_malloc_create", 00:04:31.225 "bdev_null_resize", 00:04:31.225 "bdev_null_delete", 00:04:31.225 "bdev_null_create", 00:04:31.225 "bdev_nvme_cuse_unregister", 00:04:31.225 "bdev_nvme_cuse_register", 00:04:31.225 "bdev_opal_new_user", 00:04:31.225 "bdev_opal_set_lock_state", 00:04:31.225 "bdev_opal_delete", 00:04:31.225 "bdev_opal_get_info", 00:04:31.225 "bdev_opal_create", 00:04:31.225 "bdev_nvme_opal_revert", 00:04:31.225 "bdev_nvme_opal_init", 00:04:31.225 "bdev_nvme_send_cmd", 00:04:31.225 "bdev_nvme_get_path_iostat", 00:04:31.225 "bdev_nvme_get_mdns_discovery_info", 00:04:31.225 "bdev_nvme_stop_mdns_discovery", 00:04:31.225 "bdev_nvme_start_mdns_discovery", 00:04:31.225 "bdev_nvme_set_multipath_policy", 00:04:31.225 "bdev_nvme_set_preferred_path", 00:04:31.225 "bdev_nvme_get_io_paths", 00:04:31.226 "bdev_nvme_remove_error_injection", 00:04:31.226 "bdev_nvme_add_error_injection", 00:04:31.226 "bdev_nvme_get_discovery_info", 00:04:31.226 "bdev_nvme_stop_discovery", 00:04:31.226 "bdev_nvme_start_discovery", 00:04:31.226 "bdev_nvme_get_controller_health_info", 00:04:31.226 "bdev_nvme_disable_controller", 00:04:31.226 "bdev_nvme_enable_controller", 00:04:31.226 "bdev_nvme_reset_controller", 00:04:31.226 "bdev_nvme_get_transport_statistics", 00:04:31.226 "bdev_nvme_apply_firmware", 00:04:31.226 "bdev_nvme_detach_controller", 00:04:31.226 "bdev_nvme_get_controllers", 00:04:31.226 "bdev_nvme_attach_controller", 00:04:31.226 "bdev_nvme_set_hotplug", 00:04:31.226 "bdev_nvme_set_options", 00:04:31.226 "bdev_passthru_delete", 00:04:31.226 "bdev_passthru_create", 00:04:31.226 "bdev_lvol_set_parent_bdev", 00:04:31.226 "bdev_lvol_set_parent", 00:04:31.226 "bdev_lvol_check_shallow_copy", 00:04:31.226 "bdev_lvol_start_shallow_copy", 00:04:31.226 "bdev_lvol_grow_lvstore", 00:04:31.226 "bdev_lvol_get_lvols", 00:04:31.226 "bdev_lvol_get_lvstores", 00:04:31.226 "bdev_lvol_delete", 00:04:31.226 "bdev_lvol_set_read_only", 00:04:31.226 "bdev_lvol_resize", 00:04:31.226 "bdev_lvol_decouple_parent", 00:04:31.226 "bdev_lvol_inflate", 00:04:31.226 "bdev_lvol_rename", 00:04:31.226 "bdev_lvol_clone_bdev", 00:04:31.226 "bdev_lvol_clone", 00:04:31.226 "bdev_lvol_snapshot", 00:04:31.226 "bdev_lvol_create", 
00:04:31.226 "bdev_lvol_delete_lvstore", 00:04:31.226 "bdev_lvol_rename_lvstore", 00:04:31.226 "bdev_lvol_create_lvstore", 00:04:31.226 "bdev_raid_set_options", 00:04:31.226 "bdev_raid_remove_base_bdev", 00:04:31.226 "bdev_raid_add_base_bdev", 00:04:31.226 "bdev_raid_delete", 00:04:31.226 "bdev_raid_create", 00:04:31.226 "bdev_raid_get_bdevs", 00:04:31.226 "bdev_error_inject_error", 00:04:31.226 "bdev_error_delete", 00:04:31.226 "bdev_error_create", 00:04:31.226 "bdev_split_delete", 00:04:31.226 "bdev_split_create", 00:04:31.226 "bdev_delay_delete", 00:04:31.226 "bdev_delay_create", 00:04:31.226 "bdev_delay_update_latency", 00:04:31.226 "bdev_zone_block_delete", 00:04:31.226 "bdev_zone_block_create", 00:04:31.226 "blobfs_create", 00:04:31.226 "blobfs_detect", 00:04:31.226 "blobfs_set_cache_size", 00:04:31.226 "bdev_aio_delete", 00:04:31.226 "bdev_aio_rescan", 00:04:31.226 "bdev_aio_create", 00:04:31.226 "bdev_ftl_set_property", 00:04:31.226 "bdev_ftl_get_properties", 00:04:31.226 "bdev_ftl_get_stats", 00:04:31.226 "bdev_ftl_unmap", 00:04:31.226 "bdev_ftl_unload", 00:04:31.226 "bdev_ftl_delete", 00:04:31.226 "bdev_ftl_load", 00:04:31.226 "bdev_ftl_create", 00:04:31.226 "bdev_virtio_attach_controller", 00:04:31.226 "bdev_virtio_scsi_get_devices", 00:04:31.226 "bdev_virtio_detach_controller", 00:04:31.226 "bdev_virtio_blk_set_hotplug", 00:04:31.226 "bdev_iscsi_delete", 00:04:31.226 "bdev_iscsi_create", 00:04:31.226 "bdev_iscsi_set_options", 00:04:31.226 "bdev_uring_delete", 00:04:31.226 "bdev_uring_rescan", 00:04:31.226 "bdev_uring_create", 00:04:31.226 "accel_error_inject_error", 00:04:31.226 "ioat_scan_accel_module", 00:04:31.226 "dsa_scan_accel_module", 00:04:31.226 "iaa_scan_accel_module", 00:04:31.226 "keyring_file_remove_key", 00:04:31.226 "keyring_file_add_key", 00:04:31.226 "keyring_linux_set_options", 00:04:31.226 "iscsi_get_histogram", 00:04:31.226 "iscsi_enable_histogram", 00:04:31.226 "iscsi_set_options", 00:04:31.226 "iscsi_get_auth_groups", 00:04:31.226 "iscsi_auth_group_remove_secret", 00:04:31.226 "iscsi_auth_group_add_secret", 00:04:31.226 "iscsi_delete_auth_group", 00:04:31.226 "iscsi_create_auth_group", 00:04:31.226 "iscsi_set_discovery_auth", 00:04:31.226 "iscsi_get_options", 00:04:31.226 "iscsi_target_node_request_logout", 00:04:31.226 "iscsi_target_node_set_redirect", 00:04:31.226 "iscsi_target_node_set_auth", 00:04:31.226 "iscsi_target_node_add_lun", 00:04:31.226 "iscsi_get_stats", 00:04:31.226 "iscsi_get_connections", 00:04:31.226 "iscsi_portal_group_set_auth", 00:04:31.226 "iscsi_start_portal_group", 00:04:31.226 "iscsi_delete_portal_group", 00:04:31.226 "iscsi_create_portal_group", 00:04:31.226 "iscsi_get_portal_groups", 00:04:31.226 "iscsi_delete_target_node", 00:04:31.226 "iscsi_target_node_remove_pg_ig_maps", 00:04:31.226 "iscsi_target_node_add_pg_ig_maps", 00:04:31.226 "iscsi_create_target_node", 00:04:31.226 "iscsi_get_target_nodes", 00:04:31.226 "iscsi_delete_initiator_group", 00:04:31.226 "iscsi_initiator_group_remove_initiators", 00:04:31.226 "iscsi_initiator_group_add_initiators", 00:04:31.226 "iscsi_create_initiator_group", 00:04:31.226 "iscsi_get_initiator_groups", 00:04:31.226 "nvmf_set_crdt", 00:04:31.226 "nvmf_set_config", 00:04:31.226 "nvmf_set_max_subsystems", 00:04:31.226 "nvmf_stop_mdns_prr", 00:04:31.226 "nvmf_publish_mdns_prr", 00:04:31.226 "nvmf_subsystem_get_listeners", 00:04:31.226 "nvmf_subsystem_get_qpairs", 00:04:31.226 "nvmf_subsystem_get_controllers", 00:04:31.226 "nvmf_get_stats", 00:04:31.226 "nvmf_get_transports", 00:04:31.226 
"nvmf_create_transport", 00:04:31.226 "nvmf_get_targets", 00:04:31.226 "nvmf_delete_target", 00:04:31.226 "nvmf_create_target", 00:04:31.226 "nvmf_subsystem_allow_any_host", 00:04:31.226 "nvmf_subsystem_remove_host", 00:04:31.226 "nvmf_subsystem_add_host", 00:04:31.226 "nvmf_ns_remove_host", 00:04:31.226 "nvmf_ns_add_host", 00:04:31.226 "nvmf_subsystem_remove_ns", 00:04:31.226 "nvmf_subsystem_add_ns", 00:04:31.226 "nvmf_subsystem_listener_set_ana_state", 00:04:31.226 "nvmf_discovery_get_referrals", 00:04:31.226 "nvmf_discovery_remove_referral", 00:04:31.226 "nvmf_discovery_add_referral", 00:04:31.226 "nvmf_subsystem_remove_listener", 00:04:31.226 "nvmf_subsystem_add_listener", 00:04:31.226 "nvmf_delete_subsystem", 00:04:31.226 "nvmf_create_subsystem", 00:04:31.226 "nvmf_get_subsystems", 00:04:31.226 "env_dpdk_get_mem_stats", 00:04:31.226 "nbd_get_disks", 00:04:31.226 "nbd_stop_disk", 00:04:31.226 "nbd_start_disk", 00:04:31.226 "ublk_recover_disk", 00:04:31.226 "ublk_get_disks", 00:04:31.226 "ublk_stop_disk", 00:04:31.226 "ublk_start_disk", 00:04:31.226 "ublk_destroy_target", 00:04:31.226 "ublk_create_target", 00:04:31.226 "virtio_blk_create_transport", 00:04:31.226 "virtio_blk_get_transports", 00:04:31.226 "vhost_controller_set_coalescing", 00:04:31.226 "vhost_get_controllers", 00:04:31.226 "vhost_delete_controller", 00:04:31.226 "vhost_create_blk_controller", 00:04:31.226 "vhost_scsi_controller_remove_target", 00:04:31.226 "vhost_scsi_controller_add_target", 00:04:31.226 "vhost_start_scsi_controller", 00:04:31.226 "vhost_create_scsi_controller", 00:04:31.226 "thread_set_cpumask", 00:04:31.226 "framework_get_governor", 00:04:31.226 "framework_get_scheduler", 00:04:31.226 "framework_set_scheduler", 00:04:31.226 "framework_get_reactors", 00:04:31.226 "thread_get_io_channels", 00:04:31.226 "thread_get_pollers", 00:04:31.226 "thread_get_stats", 00:04:31.226 "framework_monitor_context_switch", 00:04:31.226 "spdk_kill_instance", 00:04:31.226 "log_enable_timestamps", 00:04:31.226 "log_get_flags", 00:04:31.226 "log_clear_flag", 00:04:31.226 "log_set_flag", 00:04:31.226 "log_get_level", 00:04:31.226 "log_set_level", 00:04:31.226 "log_get_print_level", 00:04:31.226 "log_set_print_level", 00:04:31.226 "framework_enable_cpumask_locks", 00:04:31.226 "framework_disable_cpumask_locks", 00:04:31.226 "framework_wait_init", 00:04:31.226 "framework_start_init", 00:04:31.226 "scsi_get_devices", 00:04:31.226 "bdev_get_histogram", 00:04:31.226 "bdev_enable_histogram", 00:04:31.226 "bdev_set_qos_limit", 00:04:31.226 "bdev_set_qd_sampling_period", 00:04:31.226 "bdev_get_bdevs", 00:04:31.226 "bdev_reset_iostat", 00:04:31.226 "bdev_get_iostat", 00:04:31.226 "bdev_examine", 00:04:31.226 "bdev_wait_for_examine", 00:04:31.226 "bdev_set_options", 00:04:31.226 "notify_get_notifications", 00:04:31.226 "notify_get_types", 00:04:31.226 "accel_get_stats", 00:04:31.226 "accel_set_options", 00:04:31.226 "accel_set_driver", 00:04:31.226 "accel_crypto_key_destroy", 00:04:31.226 "accel_crypto_keys_get", 00:04:31.226 "accel_crypto_key_create", 00:04:31.226 "accel_assign_opc", 00:04:31.226 "accel_get_module_info", 00:04:31.226 "accel_get_opc_assignments", 00:04:31.226 "vmd_rescan", 00:04:31.226 "vmd_remove_device", 00:04:31.226 "vmd_enable", 00:04:31.226 "sock_get_default_impl", 00:04:31.226 "sock_set_default_impl", 00:04:31.226 "sock_impl_set_options", 00:04:31.226 "sock_impl_get_options", 00:04:31.226 "iobuf_get_stats", 00:04:31.226 "iobuf_set_options", 00:04:31.226 "framework_get_pci_devices", 00:04:31.226 
"framework_get_config", 00:04:31.226 "framework_get_subsystems", 00:04:31.226 "trace_get_info", 00:04:31.226 "trace_get_tpoint_group_mask", 00:04:31.226 "trace_disable_tpoint_group", 00:04:31.226 "trace_enable_tpoint_group", 00:04:31.226 "trace_clear_tpoint_mask", 00:04:31.226 "trace_set_tpoint_mask", 00:04:31.226 "keyring_get_keys", 00:04:31.226 "spdk_get_version", 00:04:31.226 "rpc_get_methods" 00:04:31.226 ] 00:04:31.226 16:08:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.226 16:08:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:31.226 16:08:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59740 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59740 ']' 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59740 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59740 00:04:31.226 killing process with pid 59740 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59740' 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59740 00:04:31.226 16:08:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59740 00:04:31.485 ************************************ 00:04:31.485 END TEST spdkcli_tcp 00:04:31.485 ************************************ 00:04:31.485 00:04:31.485 real 0m1.643s 00:04:31.485 user 0m3.174s 00:04:31.485 sys 0m0.356s 00:04:31.485 16:08:15 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.485 16:08:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.744 16:08:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:31.744 16:08:15 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.744 16:08:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.744 16:08:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.744 16:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:31.744 ************************************ 00:04:31.744 START TEST dpdk_mem_utility 00:04:31.744 ************************************ 00:04:31.744 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.744 * Looking for test storage... 00:04:31.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:31.744 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:31.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:31.744 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59831 00:04:31.744 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.744 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59831 00:04:31.744 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59831 ']' 00:04:31.744 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.744 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.744 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.744 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.744 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.744 [2024-07-12 16:08:15.364147] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:31.744 [2024-07-12 16:08:15.364504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59831 ] 00:04:32.004 [2024-07-12 16:08:15.494742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.004 [2024-07-12 16:08:15.551993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.004 [2024-07-12 16:08:15.580513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:32.004 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.004 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:32.004 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:32.004 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:32.004 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.004 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.004 { 00:04:32.004 "filename": "/tmp/spdk_mem_dump.txt" 00:04:32.004 } 00:04:32.004 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.004 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:32.263 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:32.263 1 heaps totaling size 814.000000 MiB 00:04:32.263 size: 814.000000 MiB heap id: 0 00:04:32.263 end heaps---------- 00:04:32.263 8 mempools totaling size 598.116089 MiB 00:04:32.263 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:32.263 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:32.263 size: 84.521057 MiB name: bdev_io_59831 00:04:32.263 size: 51.011292 MiB name: evtpool_59831 00:04:32.263 size: 50.003479 MiB name: msgpool_59831 00:04:32.263 size: 21.763794 MiB name: PDU_Pool 00:04:32.263 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:32.263 size: 0.026123 MiB name: Session_Pool 00:04:32.263 end mempools------- 00:04:32.263 6 memzones totaling size 4.142822 MiB 00:04:32.263 size: 1.000366 MiB name: RG_ring_0_59831 00:04:32.263 size: 1.000366 MiB 
name: RG_ring_1_59831 00:04:32.263 size: 1.000366 MiB name: RG_ring_4_59831 00:04:32.263 size: 1.000366 MiB name: RG_ring_5_59831 00:04:32.263 size: 0.125366 MiB name: RG_ring_2_59831 00:04:32.263 size: 0.015991 MiB name: RG_ring_3_59831 00:04:32.263 end memzones------- 00:04:32.263 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:32.263 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:04:32.263 list of free elements. size: 12.472290 MiB 00:04:32.263 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:32.263 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:32.263 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:32.263 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:32.263 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:32.263 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:32.263 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:32.263 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:32.263 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:32.263 element at address: 0x20001aa00000 with size: 0.568970 MiB 00:04:32.263 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:32.263 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:32.263 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:32.263 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:32.263 element at address: 0x200003a00000 with size: 0.348572 MiB 00:04:32.263 list of standard malloc elements. size: 199.265137 MiB 00:04:32.263 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:32.263 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:32.263 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:32.263 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:32.263 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:32.263 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:32.263 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:32.263 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:32.263 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:32.263 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:32.263 
element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:32.263 element at address: 
0x200003a593c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d4c0 with size: 
0.000183 MiB 00:04:32.263 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:32.263 
element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:32.263 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:32.263 element at address: 
0x200027e6c6c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6eb80 with size: 
0.000183 MiB 00:04:32.263 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:32.263 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:32.264 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:32.264 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:32.264 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:32.264 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:32.264 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:32.264 list of memzone associated elements. 
size: 602.262573 MiB 00:04:32.264 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:32.264 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:32.264 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:32.264 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:32.264 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:32.264 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59831_0 00:04:32.264 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:32.264 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59831_0 00:04:32.264 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:32.264 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59831_0 00:04:32.264 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:32.264 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:32.264 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:32.264 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:32.264 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:32.264 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59831 00:04:32.264 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:32.264 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59831 00:04:32.264 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:32.264 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59831 00:04:32.264 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:32.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:32.264 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:32.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:32.264 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:32.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:32.264 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:32.264 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:32.264 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:32.264 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59831 00:04:32.264 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:32.264 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59831 00:04:32.264 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:32.264 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59831 00:04:32.264 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:32.264 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59831 00:04:32.264 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:32.264 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59831 00:04:32.264 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:32.264 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:32.264 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:32.264 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:32.264 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:32.264 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:32.264 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:32.264 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59831 00:04:32.264 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:32.264 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:32.264 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:32.264 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:32.264 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:32.264 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59831 00:04:32.264 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:32.264 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:32.264 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:32.264 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59831 00:04:32.264 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:32.264 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59831 00:04:32.264 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:32.264 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:32.264 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:32.264 16:08:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59831 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59831 ']' 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59831 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59831 00:04:32.264 killing process with pid 59831 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59831' 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59831 00:04:32.264 16:08:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59831 00:04:32.521 ************************************ 00:04:32.521 END TEST dpdk_mem_utility 00:04:32.521 ************************************ 00:04:32.521 00:04:32.521 real 0m0.913s 00:04:32.521 user 0m1.003s 00:04:32.521 sys 0m0.274s 00:04:32.521 16:08:16 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.521 16:08:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.521 16:08:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.521 16:08:16 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:32.521 16:08:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.521 16:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.521 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:32.521 ************************************ 00:04:32.521 START TEST event 00:04:32.521 ************************************ 00:04:32.521 16:08:16 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:32.780 * Looking for test storage... 
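Stepping back to the dpdk_mem_utility suite that just finished: it exercises DPDK memory introspection by asking the target for its heap statistics through the env_dpdk_get_mem_stats RPC (the JSON reply above points the dump at /tmp/spdk_mem_dump.txt) and then summarizing that dump with scripts/dpdk_mem_info.py, once plain and once with the -m 0 option seen in the trace. A hedged sketch of the same sequence, assuming a target is already running on the default /var/tmp/spdk.sock socket; SPDK_DIR is simply the repo path from the trace.

  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # repo path from the trace

  # Ask the running target to write out its DPDK memory stats; the traced
  # reply shows the dump landing in /tmp/spdk_mem_dump.txt.
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

  # Summarize the dump, mirroring the two invocations in the trace.
  "$SPDK_DIR/scripts/dpdk_mem_info.py"
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0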
00:04:32.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:32.780 16:08:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:32.780 16:08:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:32.780 16:08:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.780 16:08:16 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:32.780 16:08:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.780 16:08:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.780 ************************************ 00:04:32.780 START TEST event_perf 00:04:32.780 ************************************ 00:04:32.780 16:08:16 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.780 Running I/O for 1 seconds...[2024-07-12 16:08:16.300218] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:32.780 [2024-07-12 16:08:16.300475] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59895 ] 00:04:32.780 [2024-07-12 16:08:16.432652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.780 [2024-07-12 16:08:16.482092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.780 Running I/O for 1 seconds...[2024-07-12 16:08:16.482211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.780 [2024-07-12 16:08:16.482324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.780 [2024-07-12 16:08:16.482324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.155 00:04:34.155 lcore 0: 201611 00:04:34.155 lcore 1: 201610 00:04:34.155 lcore 2: 201610 00:04:34.155 lcore 3: 201610 00:04:34.155 done. 00:04:34.155 00:04:34.155 real 0m1.265s 00:04:34.155 user 0m4.107s 00:04:34.155 sys 0m0.038s 00:04:34.155 16:08:17 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.155 16:08:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:34.155 ************************************ 00:04:34.155 END TEST event_perf 00:04:34.155 ************************************ 00:04:34.155 16:08:17 event -- common/autotest_common.sh@1142 -- # return 0 00:04:34.155 16:08:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:34.155 16:08:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:34.155 16:08:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.155 16:08:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.155 ************************************ 00:04:34.155 START TEST event_reactor 00:04:34.155 ************************************ 00:04:34.155 16:08:17 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:34.155 [2024-07-12 16:08:17.614454] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:04:34.155 [2024-07-12 16:08:17.614717] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59928 ] 00:04:34.155 [2024-07-12 16:08:17.748630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.155 [2024-07-12 16:08:17.805864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.569 test_start 00:04:35.569 oneshot 00:04:35.569 tick 100 00:04:35.569 tick 100 00:04:35.569 tick 250 00:04:35.569 tick 100 00:04:35.569 tick 100 00:04:35.569 tick 100 00:04:35.569 tick 500 00:04:35.569 tick 250 00:04:35.569 tick 100 00:04:35.569 tick 100 00:04:35.569 tick 250 00:04:35.569 tick 100 00:04:35.569 tick 100 00:04:35.569 test_end 00:04:35.569 00:04:35.569 real 0m1.280s 00:04:35.569 user 0m1.138s 00:04:35.569 sys 0m0.036s 00:04:35.569 16:08:18 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.569 16:08:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:35.569 ************************************ 00:04:35.569 END TEST event_reactor 00:04:35.569 ************************************ 00:04:35.569 16:08:18 event -- common/autotest_common.sh@1142 -- # return 0 00:04:35.569 16:08:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.569 16:08:18 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:35.569 16:08:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.569 16:08:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.569 ************************************ 00:04:35.569 START TEST event_reactor_perf 00:04:35.569 ************************************ 00:04:35.569 16:08:18 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.569 [2024-07-12 16:08:18.941813] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:04:35.569 [2024-07-12 16:08:18.942104] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59964 ] 00:04:35.569 [2024-07-12 16:08:19.078467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.569 [2024-07-12 16:08:19.135938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.507 test_start 00:04:36.507 test_end 00:04:36.507 Performance: 417277 events per second 00:04:36.507 00:04:36.507 real 0m1.274s 00:04:36.507 user 0m1.123s 00:04:36.507 sys 0m0.044s 00:04:36.507 16:08:20 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.507 ************************************ 00:04:36.507 END TEST event_reactor_perf 00:04:36.507 16:08:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.507 ************************************ 00:04:36.766 16:08:20 event -- common/autotest_common.sh@1142 -- # return 0 00:04:36.766 16:08:20 event -- event/event.sh@49 -- # uname -s 00:04:36.766 16:08:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:36.766 16:08:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:36.766 16:08:20 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.766 16:08:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.766 16:08:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.766 ************************************ 00:04:36.766 START TEST event_scheduler 00:04:36.766 ************************************ 00:04:36.766 16:08:20 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:36.766 * Looking for test storage... 00:04:36.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:36.766 16:08:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:36.766 16:08:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60025 00:04:36.766 16:08:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.766 16:08:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60025 00:04:36.766 16:08:20 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60025 ']' 00:04:36.766 16:08:20 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.766 16:08:20 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.766 16:08:20 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.766 16:08:20 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.766 16:08:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.766 16:08:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:36.766 [2024-07-12 16:08:20.382298] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:04:36.766 [2024-07-12 16:08:20.382403] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:04:37.025 [2024-07-12 16:08:20.520986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:37.025 [2024-07-12 16:08:20.592471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.025 [2024-07-12 16:08:20.592629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.025 [2024-07-12 16:08:20.592716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.025 [2024-07-12 16:08:20.593421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:37.960 16:08:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.960 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.960 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.960 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.960 POWER: Cannot set governor of lcore 0 to performance 00:04:37.960 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.960 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.960 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.960 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.960 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:37.960 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:37.960 POWER: Unable to set Power Management Environment for lcore 0 00:04:37.960 [2024-07-12 16:08:21.358612] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:37.960 [2024-07-12 16:08:21.358624] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:37.960 [2024-07-12 16:08:21.358632] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:37.960 [2024-07-12 16:08:21.358644] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:37.960 [2024-07-12 16:08:21.358651] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:37.960 [2024-07-12 16:08:21.358657] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.960 16:08:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.960 [2024-07-12 16:08:21.392436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:37.960 [2024-07-12 16:08:21.408463] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.960 16:08:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.960 16:08:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.960 ************************************ 00:04:37.960 START TEST scheduler_create_thread 00:04:37.960 ************************************ 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.960 2 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.960 3 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.960 4 00:04:37.960 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 5 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 6 00:04:37.961 
16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 7 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 8 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 9 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 10 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.961 16:08:21 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.961 16:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.528 16:08:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.528 16:08:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:38.528 16:08:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:38.528 16:08:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.528 16:08:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.463 16:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.463 00:04:39.463 real 0m1.752s 00:04:39.463 user 0m0.018s 00:04:39.463 sys 0m0.006s 00:04:39.463 16:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.463 16:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.463 ************************************ 00:04:39.463 END TEST scheduler_create_thread 00:04:39.463 ************************************ 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:39.721 16:08:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:39.721 16:08:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60025 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60025 ']' 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60025 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60025 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60025' 00:04:39.721 killing process with pid 60025 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60025 00:04:39.721 16:08:23 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60025 00:04:39.979 [2024-07-12 16:08:23.650448] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:40.237 00:04:40.237 real 0m3.540s 00:04:40.237 user 0m6.564s 00:04:40.237 sys 0m0.330s 00:04:40.237 16:08:23 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.237 ************************************ 00:04:40.237 END TEST event_scheduler 00:04:40.237 ************************************ 00:04:40.237 16:08:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.237 16:08:23 event -- common/autotest_common.sh@1142 -- # return 0 00:04:40.237 16:08:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:40.237 16:08:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:40.237 16:08:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.237 16:08:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.237 16:08:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.237 ************************************ 00:04:40.237 START TEST app_repeat 00:04:40.237 ************************************ 00:04:40.237 16:08:23 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60114 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.237 Process app_repeat pid: 60114 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60114' 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.237 spdk_app_start Round 0 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:40.237 16:08:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60114 /var/tmp/spdk-nbd.sock 00:04:40.237 16:08:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60114 ']' 00:04:40.237 16:08:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.237 16:08:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.237 16:08:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:40.237 16:08:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.237 16:08:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.237 [2024-07-12 16:08:23.866080] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:04:40.237 [2024-07-12 16:08:23.866147] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60114 ] 00:04:40.496 [2024-07-12 16:08:23.998736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.496 [2024-07-12 16:08:24.055151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.496 [2024-07-12 16:08:24.055158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.496 [2024-07-12 16:08:24.081772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:40.496 16:08:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.496 16:08:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:40.496 16:08:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.755 Malloc0 00:04:40.755 16:08:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.014 Malloc1 00:04:41.014 16:08:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.014 16:08:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:41.274 /dev/nbd0 00:04:41.274 16:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:41.274 16:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:41.274 16:08:24 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.274 1+0 records in 00:04:41.274 1+0 records out 00:04:41.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254645 s, 16.1 MB/s 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:41.274 16:08:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:41.274 16:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.274 16:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.274 16:08:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.533 /dev/nbd1 00:04:41.533 16:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.533 16:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.533 1+0 records in 00:04:41.533 1+0 records out 00:04:41.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245197 s, 16.7 MB/s 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:41.533 16:08:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:41.533 16:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.533 16:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.533 16:08:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:41.533 16:08:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.533 16:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.792 { 00:04:41.792 "nbd_device": "/dev/nbd0", 00:04:41.792 "bdev_name": "Malloc0" 00:04:41.792 }, 00:04:41.792 { 00:04:41.792 "nbd_device": "/dev/nbd1", 00:04:41.792 "bdev_name": "Malloc1" 00:04:41.792 } 00:04:41.792 ]' 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.792 { 00:04:41.792 "nbd_device": "/dev/nbd0", 00:04:41.792 "bdev_name": "Malloc0" 00:04:41.792 }, 00:04:41.792 { 00:04:41.792 "nbd_device": "/dev/nbd1", 00:04:41.792 "bdev_name": "Malloc1" 00:04:41.792 } 00:04:41.792 ]' 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.792 /dev/nbd1' 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.792 /dev/nbd1' 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.792 256+0 records in 00:04:41.792 256+0 records out 00:04:41.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00841002 s, 125 MB/s 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.792 16:08:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:42.050 256+0 records in 00:04:42.050 256+0 records out 00:04:42.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238594 s, 43.9 MB/s 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:42.050 256+0 records in 00:04:42.050 256+0 records out 00:04:42.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233119 s, 45.0 MB/s 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:42.050 16:08:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.051 16:08:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.310 16:08:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.568 16:08:26 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.568 16:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.826 16:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.826 16:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.827 16:08:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.827 16:08:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:43.085 16:08:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:43.342 [2024-07-12 16:08:26.820803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.342 [2024-07-12 16:08:26.869302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.342 [2024-07-12 16:08:26.869314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.342 [2024-07-12 16:08:26.896793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:43.342 [2024-07-12 16:08:26.896867] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:43.342 [2024-07-12 16:08:26.896904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:46.622 16:08:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.622 spdk_app_start Round 1 00:04:46.622 16:08:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:46.622 16:08:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60114 /var/tmp/spdk-nbd.sock 00:04:46.622 16:08:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60114 ']' 00:04:46.622 16:08:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.622 16:08:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.622 16:08:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:46.622 16:08:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.622 16:08:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.622 16:08:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.622 16:08:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:46.622 16:08:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.622 Malloc0 00:04:46.622 16:08:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.880 Malloc1 00:04:46.880 16:08:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.880 16:08:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.880 16:08:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.880 16:08:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.880 16:08:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.880 16:08:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.880 16:08:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.880 16:08:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.881 16:08:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.881 16:08:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.881 16:08:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.881 16:08:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.881 16:08:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.881 16:08:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.881 16:08:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.881 16:08:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.139 /dev/nbd0 00:04:47.139 16:08:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.139 16:08:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.139 1+0 records in 00:04:47.139 1+0 records out 
00:04:47.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281393 s, 14.6 MB/s 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:47.139 16:08:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:47.139 16:08:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.139 16:08:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.139 16:08:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.398 /dev/nbd1 00:04:47.398 16:08:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.398 16:08:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.398 1+0 records in 00:04:47.398 1+0 records out 00:04:47.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284001 s, 14.4 MB/s 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:47.398 16:08:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:47.398 16:08:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.398 16:08:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.398 16:08:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.398 16:08:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.398 16:08:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:47.657 { 00:04:47.657 "nbd_device": "/dev/nbd0", 00:04:47.657 "bdev_name": "Malloc0" 00:04:47.657 }, 00:04:47.657 { 00:04:47.657 "nbd_device": "/dev/nbd1", 00:04:47.657 "bdev_name": "Malloc1" 00:04:47.657 } 
00:04:47.657 ]' 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.657 { 00:04:47.657 "nbd_device": "/dev/nbd0", 00:04:47.657 "bdev_name": "Malloc0" 00:04:47.657 }, 00:04:47.657 { 00:04:47.657 "nbd_device": "/dev/nbd1", 00:04:47.657 "bdev_name": "Malloc1" 00:04:47.657 } 00:04:47.657 ]' 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.657 /dev/nbd1' 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.657 /dev/nbd1' 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.657 256+0 records in 00:04:47.657 256+0 records out 00:04:47.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618759 s, 169 MB/s 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.657 256+0 records in 00:04:47.657 256+0 records out 00:04:47.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260075 s, 40.3 MB/s 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.657 16:08:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.915 256+0 records in 00:04:47.915 256+0 records out 00:04:47.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273888 s, 38.3 MB/s 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.915 16:08:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.915 16:08:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.174 16:08:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.432 16:08:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.691 16:08:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.691 16:08:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.950 16:08:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.209 [2024-07-12 16:08:32.729535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.209 [2024-07-12 16:08:32.788793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.209 [2024-07-12 16:08:32.788805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.209 [2024-07-12 16:08:32.819712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:49.209 [2024-07-12 16:08:32.819801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.209 [2024-07-12 16:08:32.819814] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.517 16:08:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.517 spdk_app_start Round 2 00:04:52.517 16:08:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:52.517 16:08:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60114 /var/tmp/spdk-nbd.sock 00:04:52.517 16:08:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60114 ']' 00:04:52.517 16:08:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.517 16:08:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.517 16:08:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
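The round-1 data check traced above (nbd_common.sh@70-85) boils down to a dd/cmp loop: write 1 MiB of random data to a temporary file, replay it onto each attached /dev/nbdX with O_DIRECT, then cmp each device against the source. The sketch below is a condensed restatement, not the literal helper; the temp path and device list are stand-ins for the ones in the trace.

  tmp=/tmp/nbdrandtest                                       # stand-in for .../test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write pass onto each exported bdev
  done
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                               # verify pass: any mismatch fails the round
  done
  rm "$tmp"

A zero exit from both cmp calls is what lets the teardown and the spdk_kill_instance SIGTERM seen just above proceed to the next round.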
00:04:52.517 16:08:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.517 16:08:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.517 16:08:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.517 16:08:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:52.517 16:08:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.517 Malloc0 00:04:52.517 16:08:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.783 Malloc1 00:04:52.783 16:08:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.783 16:08:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.042 /dev/nbd0 00:04:53.042 16:08:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.042 16:08:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.042 1+0 records in 00:04:53.042 1+0 records out 
00:04:53.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426094 s, 9.6 MB/s 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.042 16:08:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.042 16:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.042 16:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.042 16:08:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.042 /dev/nbd1 00:04:53.300 16:08:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.300 16:08:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.300 1+0 records in 00:04:53.300 1+0 records out 00:04:53.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365555 s, 11.2 MB/s 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.300 16:08:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.300 16:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.300 16:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.300 16:08:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.300 16:08:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.300 16:08:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.559 { 00:04:53.559 "nbd_device": "/dev/nbd0", 00:04:53.559 "bdev_name": "Malloc0" 00:04:53.559 }, 00:04:53.559 { 00:04:53.559 "nbd_device": "/dev/nbd1", 00:04:53.559 "bdev_name": "Malloc1" 00:04:53.559 } 
00:04:53.559 ]' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.559 { 00:04:53.559 "nbd_device": "/dev/nbd0", 00:04:53.559 "bdev_name": "Malloc0" 00:04:53.559 }, 00:04:53.559 { 00:04:53.559 "nbd_device": "/dev/nbd1", 00:04:53.559 "bdev_name": "Malloc1" 00:04:53.559 } 00:04:53.559 ]' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.559 /dev/nbd1' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.559 /dev/nbd1' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.559 256+0 records in 00:04:53.559 256+0 records out 00:04:53.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00825647 s, 127 MB/s 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.559 256+0 records in 00:04:53.559 256+0 records out 00:04:53.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216883 s, 48.3 MB/s 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.559 256+0 records in 00:04:53.559 256+0 records out 00:04:53.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317633 s, 33.0 MB/s 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.559 16:08:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.559 16:08:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.818 16:08:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.077 16:08:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.336 16:08:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.336 16:08:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.336 16:08:37 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:54.336 16:08:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.336 16:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.336 16:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.336 16:08:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.336 16:08:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.336 16:08:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.336 16:08:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.336 16:08:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.336 16:08:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.336 16:08:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.596 16:08:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.855 [2024-07-12 16:08:38.380623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.855 [2024-07-12 16:08:38.429236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.855 [2024-07-12 16:08:38.429246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.855 [2024-07-12 16:08:38.456951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.855 [2024-07-12 16:08:38.457090] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.855 [2024-07-12 16:08:38.457104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.142 16:08:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60114 /var/tmp/spdk-nbd.sock 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60114 ']' 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
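Each of the three app_repeat rounds in this log follows the same driver sequence from event.sh: wait for the relaunched app to listen, create two malloc bdevs (bdev_malloc_create 64 4096), run the nbd attach/write/verify/detach cycle, then SIGTERM the instance and sleep before the next round. A minimal sketch of one round, assuming $sock is /var/tmp/spdk-nbd.sock, rpc.py is the repo's scripts/rpc.py, and waitforlisten/nbd_rpc_data_verify are the sourced helpers already traced above:

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  waitforlisten "$app_pid" "$sock"                           # the restarted app_repeat is listening again
  "$rpc" -s "$sock" bdev_malloc_create 64 4096               # -> Malloc0
  "$rpc" -s "$sock" bdev_malloc_create 64 4096               # -> Malloc1
  nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'   # attach, write, cmp, detach
  "$rpc" -s "$sock" spdk_kill_instance SIGTERM               # app stops the current iteration...
  sleep 3                                                    # ...and the next round begins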
00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.142 16:08:41 event.app_repeat -- event/event.sh@39 -- # killprocess 60114 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60114 ']' 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60114 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60114 00:04:58.142 killing process with pid 60114 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60114' 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60114 00:04:58.142 16:08:41 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60114 00:04:58.142 spdk_app_start is called in Round 0. 00:04:58.142 Shutdown signal received, stop current app iteration 00:04:58.142 Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 reinitialization... 00:04:58.142 spdk_app_start is called in Round 1. 00:04:58.142 Shutdown signal received, stop current app iteration 00:04:58.142 Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 reinitialization... 00:04:58.142 spdk_app_start is called in Round 2. 00:04:58.142 Shutdown signal received, stop current app iteration 00:04:58.142 Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 reinitialization... 00:04:58.142 spdk_app_start is called in Round 3. 
00:04:58.143 Shutdown signal received, stop current app iteration 00:04:58.143 16:08:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:58.143 16:08:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:58.143 00:04:58.143 real 0m17.858s 00:04:58.143 user 0m40.486s 00:04:58.143 sys 0m2.471s 00:04:58.143 16:08:41 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.143 ************************************ 00:04:58.143 END TEST app_repeat 00:04:58.143 ************************************ 00:04:58.143 16:08:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.143 16:08:41 event -- common/autotest_common.sh@1142 -- # return 0 00:04:58.143 16:08:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:58.143 16:08:41 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:58.143 16:08:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.143 16:08:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.143 16:08:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.143 ************************************ 00:04:58.143 START TEST cpu_locks 00:04:58.143 ************************************ 00:04:58.143 16:08:41 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:58.143 * Looking for test storage... 00:04:58.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:58.143 16:08:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:58.143 16:08:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:58.143 16:08:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:58.143 16:08:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:58.143 16:08:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.143 16:08:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.143 16:08:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.143 ************************************ 00:04:58.143 START TEST default_locks 00:04:58.143 ************************************ 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60528 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60528 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60528 ']' 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
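killprocess, which took down the app_repeat target (pid 60114) above and is reused on every spdk_tgt in the cpu_locks suite that starts here, is essentially a guarded kill-and-wait. A simplified sketch; the uname and sudo special cases visible in the trace are reduced to comments:

  killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                       # the process must still be running
    local name
    name=$(ps --no-headers -o comm= "$pid")          # reactor_0 for an SPDK app, as seen in this log
    # the real helper also branches on uname and on name == sudo; omitted in this sketch
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # works here because the target is a child of the test shell
  }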
00:04:58.143 16:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.143 16:08:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.402 [2024-07-12 16:08:41.900707] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:04:58.402 [2024-07-12 16:08:41.900814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60528 ] 00:04:58.402 [2024-07-12 16:08:42.031383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.402 [2024-07-12 16:08:42.081950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.402 [2024-07-12 16:08:42.108276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:59.337 16:08:42 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.337 16:08:42 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:59.337 16:08:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60528 00:04:59.338 16:08:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60528 00:04:59.338 16:08:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60528 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60528 ']' 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60528 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60528 00:04:59.596 killing process with pid 60528 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60528' 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60528 00:04:59.596 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60528 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60528 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60528 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.855 16:08:43 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60528 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60528 ']' 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.855 ERROR: process (pid: 60528) is no longer running 00:04:59.855 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60528) - No such process 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.855 00:04:59.855 real 0m1.641s 00:04:59.855 user 0m1.847s 00:04:59.855 sys 0m0.397s 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.855 ************************************ 00:04:59.855 END TEST default_locks 00:04:59.855 ************************************ 00:04:59.855 16:08:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.855 16:08:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:59.855 16:08:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:59.855 16:08:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.855 16:08:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.855 16:08:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.855 ************************************ 00:04:59.855 START TEST default_locks_via_rpc 00:04:59.855 ************************************ 00:04:59.855 16:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:59.855 16:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60580 00:04:59.855 16:08:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60580 00:04:59.855 16:08:43 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.855 16:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60580 ']' 00:04:59.855 16:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.856 16:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.856 16:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.856 16:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.856 16:08:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.115 [2024-07-12 16:08:43.588924] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:00.115 [2024-07-12 16:08:43.589069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60580 ] 00:05:00.115 [2024-07-12 16:08:43.720319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.115 [2024-07-12 16:08:43.777219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.115 [2024-07-12 16:08:43.803676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60580 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.052 16:08:44 event.cpu_locks.default_locks_via_rpc 
-- event/cpu_locks.sh@22 -- # lslocks -p 60580 00:05:01.311 16:08:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60580 00:05:01.311 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60580 ']' 00:05:01.311 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60580 00:05:01.311 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:01.311 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.311 16:08:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60580 00:05:01.311 16:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.311 16:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.311 killing process with pid 60580 00:05:01.311 16:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60580' 00:05:01.311 16:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60580 00:05:01.311 16:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60580 00:05:01.570 00:05:01.570 real 0m1.734s 00:05:01.570 user 0m1.988s 00:05:01.570 sys 0m0.422s 00:05:01.570 16:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.570 16:08:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.570 ************************************ 00:05:01.570 END TEST default_locks_via_rpc 00:05:01.570 ************************************ 00:05:01.829 16:08:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:01.829 16:08:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:01.829 16:08:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.829 16:08:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.829 16:08:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.829 ************************************ 00:05:01.829 START TEST non_locking_app_on_locked_coremask 00:05:01.829 ************************************ 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60631 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60631 /var/tmp/spdk.sock 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60631 ']' 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.829 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.829 16:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.829 [2024-07-12 16:08:45.382308] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:01.829 [2024-07-12 16:08:45.382428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60631 ] 00:05:01.829 [2024-07-12 16:08:45.522023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.089 [2024-07-12 16:08:45.575001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.089 [2024-07-12 16:08:45.603044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.657 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.657 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:02.657 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:02.658 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60647 00:05:02.658 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60647 /var/tmp/spdk2.sock 00:05:02.658 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60647 ']' 00:05:02.658 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.658 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.658 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.658 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.658 16:08:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.917 [2024-07-12 16:08:46.395446] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:02.917 [2024-07-12 16:08:46.395539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:05:02.917 [2024-07-12 16:08:46.530456] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
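The scenario being set up here is two spdk_tgt instances pinned to the same core mask, where only the second one passes --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice just above) and listens on a second RPC socket. A condensed sketch with the binary path spelled out and the helper names reused from the trace; only the first target should end up holding the spdk_cpu_lock file lock:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                                                  # first target claims the core-0 lock
  pid1=$!
  waitforlisten "$pid1"
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target skips lock claiming
  pid2=$!
  waitforlisten "$pid2" /var/tmp/spdk2.sock
  lslocks -p "$pid1" | grep -q spdk_cpu_lock                            # the lock is held by the first target only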
00:05:02.917 [2024-07-12 16:08:46.530525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.176 [2024-07-12 16:08:46.648895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.176 [2024-07-12 16:08:46.711717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.743 16:08:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.743 16:08:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:03.743 16:08:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60631 00:05:03.743 16:08:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60631 00:05:03.743 16:08:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60631 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60631 ']' 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60631 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60631 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.680 killing process with pid 60631 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60631' 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60631 00:05:04.680 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60631 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60647 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60647 ']' 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60647 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60647 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.940 killing process with pid 60647 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.940 16:08:48 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60647' 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60647 00:05:04.940 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60647 00:05:05.199 00:05:05.199 real 0m3.553s 00:05:05.199 user 0m4.153s 00:05:05.199 sys 0m0.859s 00:05:05.199 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.199 16:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.199 ************************************ 00:05:05.199 END TEST non_locking_app_on_locked_coremask 00:05:05.199 ************************************ 00:05:05.199 16:08:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:05.199 16:08:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:05.199 16:08:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.199 16:08:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.199 16:08:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.458 ************************************ 00:05:05.458 START TEST locking_app_on_unlocked_coremask 00:05:05.458 ************************************ 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60703 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60703 /var/tmp/spdk.sock 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60703 ']' 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.458 16:08:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.458 [2024-07-12 16:08:48.993297] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:05.458 [2024-07-12 16:08:48.993429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60703 ] 00:05:05.458 [2024-07-12 16:08:49.126030] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:05.458 [2024-07-12 16:08:49.126109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.458 [2024-07-12 16:08:49.177878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.717 [2024-07-12 16:08:49.205597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60719 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60719 /var/tmp/spdk2.sock 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60719 ']' 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.285 16:08:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.285 [2024-07-12 16:08:49.941875] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
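locking_app_on_unlocked_coremask is the mirror image of the previous case: the first target (pid 60703) runs with --disable-cpumask-locks, the second (pid 60719) without, so the lock check is made against the second pid. The locks_exist helper that keeps appearing in these traces is just an lslocks filter; a sketch:

  locks_exist_sketch() {
    lslocks -p "$1" | grep -q spdk_cpu_lock        # true if the pid holds the core lock reported by lslocks
  }
  locks_exist_sketch "$pid2"                       # here: the target started without --disable-cpumask-locks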
00:05:06.285 [2024-07-12 16:08:49.942172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60719 ] 00:05:06.545 [2024-07-12 16:08:50.082184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.545 [2024-07-12 16:08:50.192954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.545 [2024-07-12 16:08:50.249379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.521 16:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.521 16:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.521 16:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60719 00:05:07.521 16:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60719 00:05:07.521 16:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.780 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60703 00:05:07.780 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60703 ']' 00:05:07.780 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60703 00:05:07.780 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:08.039 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.039 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60703 00:05:08.039 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.039 killing process with pid 60703 00:05:08.039 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.039 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60703' 00:05:08.039 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60703 00:05:08.039 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60703 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60719 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60719 ']' 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60719 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60719 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.299 killing process with pid 60719 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60719' 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60719 00:05:08.299 16:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60719 00:05:08.559 00:05:08.559 real 0m3.307s 00:05:08.559 user 0m3.860s 00:05:08.559 sys 0m0.782s 00:05:08.559 16:08:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.559 16:08:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.559 ************************************ 00:05:08.559 END TEST locking_app_on_unlocked_coremask 00:05:08.559 ************************************ 00:05:08.559 16:08:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:08.559 16:08:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:08.559 16:08:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.559 16:08:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.559 16:08:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.819 ************************************ 00:05:08.819 START TEST locking_app_on_locked_coremask 00:05:08.819 ************************************ 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60775 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60775 /var/tmp/spdk.sock 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60775 ']' 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.819 16:08:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.819 [2024-07-12 16:08:52.350454] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:08.819 [2024-07-12 16:08:52.350562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60775 ] 00:05:08.819 [2024-07-12 16:08:52.482744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.819 [2024-07-12 16:08:52.532766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.078 [2024-07-12 16:08:52.560891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60791 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60791 /var/tmp/spdk2.sock 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60791 /var/tmp/spdk2.sock 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60791 /var/tmp/spdk2.sock 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60791 ']' 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.646 16:08:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.646 [2024-07-12 16:08:53.351831] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:09.646 [2024-07-12 16:08:53.352105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60791 ] 00:05:09.905 [2024-07-12 16:08:53.489752] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60775 has claimed it. 00:05:09.905 [2024-07-12 16:08:53.489828] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.474 ERROR: process (pid: 60791) is no longer running 00:05:10.474 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60791) - No such process 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60775 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60775 00:05:10.474 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.041 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60775 00:05:11.041 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60775 ']' 00:05:11.041 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60775 00:05:11.041 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:11.041 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.041 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60775 00:05:11.041 killing process with pid 60775 00:05:11.041 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.041 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.042 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60775' 00:05:11.042 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60775 00:05:11.042 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60775 00:05:11.042 ************************************ 00:05:11.042 END TEST locking_app_on_locked_coremask 00:05:11.042 ************************************ 00:05:11.042 00:05:11.042 real 0m2.436s 00:05:11.042 user 0m2.968s 00:05:11.042 sys 0m0.481s 00:05:11.042 16:08:54 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.042 16:08:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.301 16:08:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:11.301 16:08:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:11.301 16:08:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.301 16:08:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.301 16:08:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.301 ************************************ 00:05:11.301 START TEST locking_overlapped_coremask 00:05:11.301 ************************************ 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:11.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60842 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60842 /var/tmp/spdk.sock 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60842 ']' 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.301 16:08:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.301 [2024-07-12 16:08:54.829291] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:11.301 [2024-07-12 16:08:54.829426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ] 00:05:11.301 [2024-07-12 16:08:54.961820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.301 [2024-07-12 16:08:55.012926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.301 [2024-07-12 16:08:55.013058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.301 [2024-07-12 16:08:55.013075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.560 [2024-07-12 16:08:55.041576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60847 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60847 /var/tmp/spdk2.sock 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60847 /var/tmp/spdk2.sock 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60847 /var/tmp/spdk2.sock 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60847 ']' 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.560 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.561 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.561 [2024-07-12 16:08:55.223440] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:11.561 [2024-07-12 16:08:55.223703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60847 ] 00:05:11.819 [2024-07-12 16:08:55.371665] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60842 has claimed it. 00:05:11.819 [2024-07-12 16:08:55.371782] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:12.386 ERROR: process (pid: 60847) is no longer running 00:05:12.386 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60847) - No such process 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.386 16:08:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60842 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60842 ']' 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60842 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60842 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60842' 00:05:12.387 killing process with pid 60842 00:05:12.387 16:08:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60842 00:05:12.387 16:08:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60842 00:05:12.645 00:05:12.645 real 0m1.413s 00:05:12.645 user 0m3.866s 00:05:12.645 sys 0m0.266s 00:05:12.645 ************************************ 00:05:12.645 END TEST locking_overlapped_coremask 00:05:12.645 ************************************ 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.645 16:08:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:12.645 16:08:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:12.645 16:08:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.645 16:08:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.645 16:08:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.645 ************************************ 00:05:12.645 START TEST locking_overlapped_coremask_via_rpc 00:05:12.645 ************************************ 00:05:12.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60887 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60887 /var/tmp/spdk.sock 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60887 ']' 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.645 16:08:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.645 [2024-07-12 16:08:56.295461] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:12.645 [2024-07-12 16:08:56.295548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60887 ] 00:05:12.903 [2024-07-12 16:08:56.429807] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:12.903 [2024-07-12 16:08:56.429996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.903 [2024-07-12 16:08:56.493628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.903 [2024-07-12 16:08:56.493759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.903 [2024-07-12 16:08:56.493765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.903 [2024-07-12 16:08:56.523767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60905 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60905 /var/tmp/spdk2.sock 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60905 ']' 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.837 16:08:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.837 [2024-07-12 16:08:57.280406] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:13.837 [2024-07-12 16:08:57.280499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60905 ] 00:05:13.837 [2024-07-12 16:08:57.424321] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:13.837 [2024-07-12 16:08:57.424372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.837 [2024-07-12 16:08:57.545941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.837 [2024-07-12 16:08:57.546082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:13.837 [2024-07-12 16:08:57.546086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.095 [2024-07-12 16:08:57.632460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.662 [2024-07-12 16:08:58.186041] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60887 has claimed it. 00:05:14.662 request: 00:05:14.662 { 00:05:14.662 "method": "framework_enable_cpumask_locks", 00:05:14.662 "req_id": 1 00:05:14.662 } 00:05:14.662 Got JSON-RPC error response 00:05:14.662 response: 00:05:14.662 { 00:05:14.662 "code": -32603, 00:05:14.662 "message": "Failed to claim CPU core: 2" 00:05:14.662 } 00:05:14.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60887 /var/tmp/spdk.sock 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60887 ']' 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.662 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60905 /var/tmp/spdk2.sock 00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60905 ']' 00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.921 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.180 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.180 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.180 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:15.180 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:15.180 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:15.180 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:15.180 00:05:15.180 real 0m2.514s 00:05:15.180 user 0m1.256s 00:05:15.180 sys 0m0.174s 00:05:15.180 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.180 16:08:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.180 ************************************ 00:05:15.180 END TEST locking_overlapped_coremask_via_rpc 00:05:15.180 ************************************ 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:15.180 16:08:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:15.180 16:08:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60887 ]] 00:05:15.180 16:08:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60887 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60887 ']' 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60887 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60887 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.180 killing process with pid 60887 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60887' 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60887 00:05:15.180 16:08:58 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60887 00:05:15.438 16:08:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60905 ]] 00:05:15.438 16:08:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60905 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60905 ']' 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60905 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:15.439 16:08:59 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60905 00:05:15.439 killing process with pid 60905 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60905' 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60905 00:05:15.439 16:08:59 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60905 00:05:15.697 16:08:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.697 16:08:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:15.697 16:08:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60887 ]] 00:05:15.697 Process with pid 60887 is not found 00:05:15.697 Process with pid 60905 is not found 00:05:15.697 16:08:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60887 00:05:15.697 16:08:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60887 ']' 00:05:15.697 16:08:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60887 00:05:15.697 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60887) - No such process 00:05:15.697 16:08:59 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60887 is not found' 00:05:15.697 16:08:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60905 ]] 00:05:15.697 16:08:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60905 00:05:15.697 16:08:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60905 ']' 00:05:15.697 16:08:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60905 00:05:15.697 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60905) - No such process 00:05:15.697 16:08:59 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60905 is not found' 00:05:15.697 16:08:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.697 00:05:15.697 real 0m17.603s 00:05:15.697 user 0m31.269s 00:05:15.697 sys 0m4.016s 00:05:15.697 16:08:59 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.697 16:08:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.697 ************************************ 00:05:15.697 END TEST cpu_locks 00:05:15.697 ************************************ 00:05:15.697 16:08:59 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.697 00:05:15.697 real 0m43.194s 00:05:15.697 user 1m24.809s 00:05:15.697 sys 0m7.162s 00:05:15.697 ************************************ 00:05:15.697 END TEST event 00:05:15.697 ************************************ 00:05:15.697 16:08:59 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.697 16:08:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.956 16:08:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.956 16:08:59 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:15.956 16:08:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.956 16:08:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.956 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:15.956 ************************************ 00:05:15.956 START TEST thread 
00:05:15.956 ************************************ 00:05:15.956 16:08:59 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:15.956 * Looking for test storage... 00:05:15.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:15.956 16:08:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.956 16:08:59 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:15.956 16:08:59 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.956 16:08:59 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.956 ************************************ 00:05:15.956 START TEST thread_poller_perf 00:05:15.956 ************************************ 00:05:15.956 16:08:59 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.956 [2024-07-12 16:08:59.545037] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:15.956 [2024-07-12 16:08:59.545168] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61024 ] 00:05:16.214 [2024-07-12 16:08:59.683796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.214 [2024-07-12 16:08:59.742476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.214 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:17.149 ====================================== 00:05:17.149 busy:2211950848 (cyc) 00:05:17.149 total_run_count: 348000 00:05:17.149 tsc_hz: 2200000000 (cyc) 00:05:17.149 ====================================== 00:05:17.149 poller_cost: 6356 (cyc), 2889 (nsec) 00:05:17.149 00:05:17.149 real 0m1.291s 00:05:17.149 user 0m1.149s 00:05:17.149 sys 0m0.035s 00:05:17.149 16:09:00 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.149 16:09:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.149 ************************************ 00:05:17.149 END TEST thread_poller_perf 00:05:17.150 ************************************ 00:05:17.150 16:09:00 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:17.150 16:09:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:17.150 16:09:00 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:17.150 16:09:00 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.150 16:09:00 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.150 ************************************ 00:05:17.150 START TEST thread_poller_perf 00:05:17.150 ************************************ 00:05:17.150 16:09:00 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:17.408 [2024-07-12 16:09:00.891581] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:17.408 [2024-07-12 16:09:00.891686] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 00:05:17.408 [2024-07-12 16:09:01.029616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.408 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:17.408 [2024-07-12 16:09:01.081166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.784 ====================================== 00:05:18.784 busy:2201833530 (cyc) 00:05:18.784 total_run_count: 4636000 00:05:18.784 tsc_hz: 2200000000 (cyc) 00:05:18.784 ====================================== 00:05:18.784 poller_cost: 474 (cyc), 215 (nsec) 00:05:18.784 ************************************ 00:05:18.784 END TEST thread_poller_perf 00:05:18.784 ************************************ 00:05:18.784 00:05:18.784 real 0m1.273s 00:05:18.784 user 0m1.129s 00:05:18.784 sys 0m0.039s 00:05:18.785 16:09:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.785 16:09:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.785 16:09:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:18.785 16:09:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:18.785 00:05:18.785 real 0m2.750s 00:05:18.785 user 0m2.337s 00:05:18.785 sys 0m0.187s 00:05:18.785 ************************************ 00:05:18.785 END TEST thread 00:05:18.785 ************************************ 00:05:18.785 16:09:02 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.785 16:09:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.785 16:09:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.785 16:09:02 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:18.785 16:09:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.785 16:09:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.785 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:18.785 ************************************ 00:05:18.785 START TEST accel 00:05:18.785 ************************************ 00:05:18.785 16:09:02 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:18.785 * Looking for test storage... 00:05:18.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:18.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.785 16:09:02 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:18.785 16:09:02 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:18.785 16:09:02 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:18.785 16:09:02 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61132 00:05:18.785 16:09:02 accel -- accel/accel.sh@63 -- # waitforlisten 61132 00:05:18.785 16:09:02 accel -- common/autotest_common.sh@829 -- # '[' -z 61132 ']' 00:05:18.785 16:09:02 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.785 16:09:02 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.785 16:09:02 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:18.785 16:09:02 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.785 16:09:02 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:18.785 16:09:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.785 16:09:02 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:18.785 16:09:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.785 16:09:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.785 16:09:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.785 16:09:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.785 16:09:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.785 16:09:02 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:18.785 16:09:02 accel -- accel/accel.sh@41 -- # jq -r . 00:05:18.785 [2024-07-12 16:09:02.393346] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:18.785 [2024-07-12 16:09:02.393675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61132 ] 00:05:19.044 [2024-07-12 16:09:02.523795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.044 [2024-07-12 16:09:02.576775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.044 [2024-07-12 16:09:02.607356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:19.044 16:09:02 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.044 16:09:02 accel -- common/autotest_common.sh@862 -- # return 0 00:05:19.044 16:09:02 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:19.044 16:09:02 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:19.044 16:09:02 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:19.044 16:09:02 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:19.044 16:09:02 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:19.044 16:09:02 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:19.044 16:09:02 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:19.044 16:09:02 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.044 16:09:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.044 16:09:02 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 
16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:19.304 16:09:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:19.304 16:09:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:19.304 16:09:02 accel -- accel/accel.sh@75 -- # killprocess 61132 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@948 -- # '[' -z 61132 ']' 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@952 -- # kill -0 61132 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@953 -- # uname 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61132 00:05:19.304 killing process with pid 61132 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61132' 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@967 -- # kill 61132 00:05:19.304 16:09:02 accel -- common/autotest_common.sh@972 -- # wait 61132 00:05:19.563 16:09:03 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:19.563 16:09:03 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:19.563 16:09:03 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:19.563 16:09:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.563 16:09:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.563 16:09:03 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:19.563 16:09:03 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:19.563 16:09:03 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.563 16:09:03 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:19.563 16:09:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.563 16:09:03 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:19.563 16:09:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:19.563 16:09:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.563 16:09:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.563 ************************************ 00:05:19.563 START TEST accel_missing_filename 00:05:19.563 ************************************ 00:05:19.563 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:19.563 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:19.563 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:19.563 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:19.563 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.564 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:19.564 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.564 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:19.564 16:09:03 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:19.564 [2024-07-12 16:09:03.174435] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:19.564 [2024-07-12 16:09:03.174523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61176 ] 00:05:19.822 [2024-07-12 16:09:03.310956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.822 [2024-07-12 16:09:03.365649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.822 [2024-07-12 16:09:03.393745] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:19.822 [2024-07-12 16:09:03.431201] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:19.822 A filename is required. 
00:05:19.822 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:19.822 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.822 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:19.822 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:19.822 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:19.822 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.822 00:05:19.822 real 0m0.349s 00:05:19.822 user 0m0.241s 00:05:19.822 sys 0m0.073s 00:05:19.822 ************************************ 00:05:19.822 END TEST accel_missing_filename 00:05:19.822 ************************************ 00:05:19.822 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.822 16:09:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:19.822 16:09:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.822 16:09:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.822 16:09:03 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:19.822 16:09:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.822 16:09:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.086 ************************************ 00:05:20.086 START TEST accel_compress_verify 00:05:20.086 ************************************ 00:05:20.086 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:20.086 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:20.086 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:20.086 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:20.086 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.086 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:20.086 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.087 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:20.087 16:09:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:20.087 16:09:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:20.087 16:09:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.087 16:09:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.087 16:09:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.087 16:09:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.087 16:09:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.087 16:09:03 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:20.087 16:09:03 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:20.087 [2024-07-12 16:09:03.580562] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:20.087 [2024-07-12 16:09:03.581141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61195 ] 00:05:20.087 [2024-07-12 16:09:03.718468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.087 [2024-07-12 16:09:03.770369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.087 [2024-07-12 16:09:03.798293] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:20.370 [2024-07-12 16:09:03.837851] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:20.370 00:05:20.370 Compression does not support the verify option, aborting. 00:05:20.370 ************************************ 00:05:20.370 END TEST accel_compress_verify 00:05:20.370 ************************************ 00:05:20.370 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:20.370 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.370 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:20.370 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:20.370 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:20.370 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.370 00:05:20.370 real 0m0.355s 00:05:20.370 user 0m0.215s 00:05:20.370 sys 0m0.081s 00:05:20.370 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.370 16:09:03 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:20.370 16:09:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.370 16:09:03 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:20.370 16:09:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:20.370 16:09:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.370 16:09:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.370 ************************************ 00:05:20.370 START TEST accel_wrong_workload 00:05:20.370 ************************************ 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:20.370 16:09:03 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:20.370 Unsupported workload type: foobar 00:05:20.370 [2024-07-12 16:09:03.974202] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:20.370 accel_perf options: 00:05:20.370 [-h help message] 00:05:20.370 [-q queue depth per core] 00:05:20.370 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:20.370 [-T number of threads per core 00:05:20.370 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:20.370 [-t time in seconds] 00:05:20.370 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:20.370 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:20.370 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:20.370 [-l for compress/decompress workloads, name of uncompressed input file 00:05:20.370 [-S for crc32c workload, use this seed value (default 0) 00:05:20.370 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:20.370 [-f for fill workload, use this BYTE value (default 255) 00:05:20.370 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:20.370 [-y verify result if this switch is on] 00:05:20.370 [-a tasks to allocate per core (default: same value as -q)] 00:05:20.370 Can be used to spread operations across a wider range of memory. 
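The options listing above is accel_perf's own usage dump, printed because the harness deliberately passed the unsupported workload name "foobar". For orientation only, a supported invocation combining several of the flags from that listing, mirroring the crc32c run exercised later in this log and using the binary path shown in the trace, might look like:

    # Illustrative only: 1-second crc32c run, seed 32, queue depth 64, 4 KiB transfers, verify results.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y -q 64 -o 4096

As each accel_perf command line in the trace shows, the test scripts also supply -c /dev/fd/62 to feed a generated accel JSON config over a file descriptor.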
00:05:20.370 ************************************ 00:05:20.370 END TEST accel_wrong_workload 00:05:20.370 ************************************ 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.370 00:05:20.370 real 0m0.027s 00:05:20.370 user 0m0.010s 00:05:20.370 sys 0m0.016s 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.370 16:09:03 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:20.370 16:09:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.370 16:09:04 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:20.370 16:09:04 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:20.370 16:09:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.370 16:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.370 ************************************ 00:05:20.370 START TEST accel_negative_buffers 00:05:20.370 ************************************ 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:20.370 16:09:04 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:20.370 -x option must be non-negative. 
00:05:20.370 [2024-07-12 16:09:04.047301] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:20.370 accel_perf options: 00:05:20.370 [-h help message] 00:05:20.370 [-q queue depth per core] 00:05:20.370 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:20.370 [-T number of threads per core 00:05:20.370 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:20.370 [-t time in seconds] 00:05:20.370 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:20.370 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:20.370 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:20.370 [-l for compress/decompress workloads, name of uncompressed input file 00:05:20.370 [-S for crc32c workload, use this seed value (default 0) 00:05:20.370 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:20.370 [-f for fill workload, use this BYTE value (default 255) 00:05:20.370 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:20.370 [-y verify result if this switch is on] 00:05:20.370 [-a tasks to allocate per core (default: same value as -q)] 00:05:20.370 Can be used to spread operations across a wider range of memory. 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.370 ************************************ 00:05:20.370 END TEST accel_negative_buffers 00:05:20.370 ************************************ 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.370 00:05:20.370 real 0m0.028s 00:05:20.370 user 0m0.016s 00:05:20.370 sys 0m0.010s 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.370 16:09:04 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:20.643 16:09:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.643 16:09:04 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:20.643 16:09:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:20.643 16:09:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.643 16:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.643 ************************************ 00:05:20.643 START TEST accel_crc32c 00:05:20.643 ************************************ 00:05:20.643 16:09:04 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:20.643 [2024-07-12 16:09:04.128435] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:20.643 [2024-07-12 16:09:04.128528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61259 ] 00:05:20.643 [2024-07-12 16:09:04.267205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.643 [2024-07-12 16:09:04.322082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.643 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.644 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.902 16:09:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:21.837 16:09:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.837 00:05:21.837 real 0m1.363s 00:05:21.837 user 0m1.194s 00:05:21.837 sys 0m0.074s 00:05:21.837 16:09:05 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.837 ************************************ 00:05:21.837 END TEST accel_crc32c 00:05:21.837 ************************************ 00:05:21.837 16:09:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:21.837 16:09:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.837 16:09:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:21.837 16:09:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:21.837 16:09:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.837 16:09:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.837 ************************************ 00:05:21.837 START TEST accel_crc32c_C2 00:05:21.837 ************************************ 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:21.837 16:09:05 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:21.837 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:21.837 [2024-07-12 16:09:05.543554] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:21.837 [2024-07-12 16:09:05.543641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61288 ] 00:05:22.095 [2024-07-12 16:09:05.678432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.095 [2024-07-12 16:09:05.736376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.095 16:09:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.473 16:09:06 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.473 ************************************ 00:05:23.473 END TEST accel_crc32c_C2 00:05:23.473 ************************************ 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.473 00:05:23.473 real 0m1.352s 00:05:23.473 user 0m1.177s 00:05:23.473 sys 0m0.077s 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.473 16:09:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:23.473 16:09:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.473 16:09:06 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:23.473 16:09:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:23.473 16:09:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.473 16:09:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.473 ************************************ 00:05:23.473 START TEST accel_copy 00:05:23.473 ************************************ 00:05:23.473 16:09:06 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.473 16:09:06 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:23.473 16:09:06 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:23.473 [2024-07-12 16:09:06.952003] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:23.474 [2024-07-12 16:09:06.952085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61323 ] 00:05:23.474 [2024-07-12 16:09:07.081483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.474 [2024-07-12 16:09:07.134270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 
16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 16:09:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:24.851 16:09:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.851 00:05:24.851 real 0m1.345s 00:05:24.851 user 0m1.177s 00:05:24.851 sys 0m0.068s 00:05:24.851 ************************************ 00:05:24.851 END TEST accel_copy 00:05:24.851 ************************************ 00:05:24.851 16:09:08 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.851 16:09:08 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:24.851 16:09:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.851 16:09:08 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:24.851 16:09:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:24.851 16:09:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.851 16:09:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.851 ************************************ 00:05:24.851 START TEST accel_fill 00:05:24.851 ************************************ 00:05:24.851 16:09:08 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.851 16:09:08 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:24.851 [2024-07-12 16:09:08.348845] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:24.851 [2024-07-12 16:09:08.348937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61357 ] 00:05:24.851 [2024-07-12 16:09:08.479341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.851 [2024-07-12 16:09:08.529358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:24.851 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.852 16:09:08 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.852 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:25.111 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:25.111 16:09:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:25.111 16:09:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:25.111 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:25.111 16:09:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:26.048 ************************************ 00:05:26.048 END TEST accel_fill 00:05:26.048 ************************************ 00:05:26.048 16:09:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.048 00:05:26.048 real 0m1.341s 00:05:26.048 user 0m1.181s 00:05:26.048 sys 0m0.062s 00:05:26.048 16:09:09 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.048 16:09:09 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:26.048 16:09:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.048 16:09:09 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:26.048 16:09:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:26.048 16:09:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.048 16:09:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.048 ************************************ 00:05:26.048 START TEST accel_copy_crc32c 00:05:26.048 ************************************ 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:26.048 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:26.048 [2024-07-12 16:09:09.748409] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:26.048 [2024-07-12 16:09:09.748519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61386 ] 00:05:26.308 [2024-07-12 16:09:09.884963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.308 [2024-07-12 16:09:09.937873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.308 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.309 16:09:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.689 00:05:27.689 real 0m1.353s 00:05:27.689 user 0m1.181s 00:05:27.689 sys 0m0.081s 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.689 16:09:11 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:27.689 ************************************ 00:05:27.689 END TEST accel_copy_crc32c 00:05:27.689 ************************************ 00:05:27.689 16:09:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.689 16:09:11 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:27.689 16:09:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:27.689 16:09:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.689 16:09:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.689 ************************************ 00:05:27.689 START TEST accel_copy_crc32c_C2 00:05:27.689 ************************************ 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:27.689 [2024-07-12 16:09:11.148509] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:27.689 [2024-07-12 16:09:11.148610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61421 ] 00:05:27.689 [2024-07-12 16:09:11.282932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.689 [2024-07-12 16:09:11.334700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.689 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.690 16:09:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.069 00:05:29.069 real 0m1.341s 00:05:29.069 user 0m1.179s 00:05:29.069 sys 0m0.069s 00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
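For reference, the two copy_crc32c cases traced above are driven by the accel_perf example binary whose command lines are echoed in the log; run standalone they would look roughly like the sketch below. This is only a sketch: the JSON config that accel.sh feeds in via -c /dev/fd/62 is omitted here, and the flags are copied verbatim from the trace rather than documented behaviour.
# plain copy+crc32c workload, 1-second run, as echoed for TEST accel_copy_crc32c
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
# the _C2 variant adds -C 2 (its config dump above echoes an 8192-byte value where the plain run echoes 4096)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2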
00:05:29.069 16:09:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:29.069 ************************************ 00:05:29.069 END TEST accel_copy_crc32c_C2 00:05:29.069 ************************************ 00:05:29.069 16:09:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.069 16:09:12 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:29.069 16:09:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:29.069 16:09:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.069 16:09:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.069 ************************************ 00:05:29.069 START TEST accel_dualcast 00:05:29.069 ************************************ 00:05:29.069 16:09:12 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:29.069 [2024-07-12 16:09:12.540262] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:29.069 [2024-07-12 16:09:12.540342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61455 ] 00:05:29.069 [2024-07-12 16:09:12.677944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.069 [2024-07-12 16:09:12.729020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:29.069 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.070 16:09:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:30.446 16:09:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.446 00:05:30.446 real 0m1.346s 00:05:30.446 user 0m1.183s 00:05:30.446 sys 0m0.073s 00:05:30.446 16:09:13 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.446 ************************************ 00:05:30.446 END TEST accel_dualcast 00:05:30.446 ************************************ 00:05:30.446 16:09:13 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:30.446 16:09:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.446 16:09:13 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:30.446 16:09:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:30.446 16:09:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.446 16:09:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.446 ************************************ 00:05:30.446 START TEST accel_compare 00:05:30.446 ************************************ 00:05:30.446 16:09:13 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:30.446 16:09:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:30.446 16:09:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:30.446 16:09:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.446 16:09:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:30.447 16:09:13 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:30.447 [2024-07-12 16:09:13.937580] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:30.447 [2024-07-12 16:09:13.937677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61490 ] 00:05:30.447 [2024-07-12 16:09:14.073035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.447 [2024-07-12 16:09:14.124137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.447 16:09:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:31.825 16:09:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.825 00:05:31.825 real 0m1.359s 00:05:31.825 user 0m1.192s 00:05:31.825 sys 0m0.078s 00:05:31.825 16:09:15 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.826 16:09:15 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:31.826 ************************************ 00:05:31.826 END TEST accel_compare 00:05:31.826 ************************************ 00:05:31.826 16:09:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.826 16:09:15 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:31.826 16:09:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:31.826 16:09:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.826 16:09:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.826 ************************************ 00:05:31.826 START TEST accel_xor 00:05:31.826 ************************************ 00:05:31.826 16:09:15 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:31.826 16:09:15 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:31.826 [2024-07-12 16:09:15.343714] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:31.826 [2024-07-12 16:09:15.343809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61519 ] 00:05:31.826 [2024-07-12 16:09:15.482318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.826 [2024-07-12 16:09:15.536635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.085 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.086 16:09:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.023 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.024 16:09:16 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.024 00:05:33.024 real 0m1.349s 00:05:33.024 user 0m1.196s 00:05:33.024 sys 0m0.064s 00:05:33.024 16:09:16 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.024 16:09:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:33.024 ************************************ 00:05:33.024 END TEST accel_xor 00:05:33.024 ************************************ 00:05:33.024 16:09:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.024 16:09:16 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:33.024 16:09:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:33.024 16:09:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.024 16:09:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.024 ************************************ 00:05:33.024 START TEST accel_xor 00:05:33.024 ************************************ 00:05:33.024 16:09:16 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:33.024 16:09:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:33.024 [2024-07-12 16:09:16.745784] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
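The two accel_xor runs launched above differ only in the source-buffer count: the first run's config dump echoes val=2, while the second passes -x 3 and echoes val=3. A rough standalone equivalent, again leaving out the -c /dev/fd/62 config that the script supplies, would be:
# xor workload with the default two source buffers, software module, 1 second
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
# same workload with three source buffers, matching the second accel_xor test
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3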
00:05:33.024 [2024-07-12 16:09:16.745959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61553 ] 00:05:33.284 [2024-07-12 16:09:16.885392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.284 [2024-07-12 16:09:16.940767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.284 16:09:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.663 16:09:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.664 16:09:18 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:34.664 16:09:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.664 00:05:34.664 real 0m1.367s 00:05:34.664 user 0m0.018s 00:05:34.664 sys 0m0.004s 00:05:34.664 16:09:18 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.664 16:09:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:34.664 ************************************ 00:05:34.664 END TEST accel_xor 00:05:34.664 ************************************ 00:05:34.664 16:09:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.664 16:09:18 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:34.664 16:09:18 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:34.664 16:09:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.664 16:09:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.664 ************************************ 00:05:34.664 START TEST accel_dif_verify 00:05:34.664 ************************************ 00:05:34.664 16:09:18 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:34.664 16:09:18 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:34.664 [2024-07-12 16:09:18.171589] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
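A note on the three trace lines that repeat throughout this section (accel.sh@19 IFS=:, accel.sh@19 read -r var val, accel.sh@21 case "$var" in): they appear to come from a single loop in accel.sh that splits each line of accel_perf's start-up banner on ':' and records the fields later tested at accel.sh@27 (the module, "software", at @22 and the opcode at @23). A minimal sketch of that shape follows; the banner field names and case patterns are assumptions for illustration, not taken from this log:

    # sketch only -- the real loop lives in accel.sh; field names below are assumed example banner output
    printf 'Module: software\nWorkload Type: dif_verify\n' |
    while IFS=: read -r var val; do
        case "$var" in
            *Module*)            accel_module=${val# } ;;   # trace shows this landing on "software"
            *"Workload Type"*)   accel_opc=${val# } ;;      # e.g. xor, dif_verify, compress
        esac
        echo "parsed: var='$var' val='${val# }'"
    done

Each pass through that loop emits one IFS=: / read -r var val / case "$var" in triple in the xtrace, which is why those three lines dominate this part of the log.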
00:05:34.664 [2024-07-12 16:09:18.171697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61588 ] 00:05:34.664 [2024-07-12 16:09:18.310449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.664 [2024-07-12 16:09:18.362971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.931 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.932 16:09:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.883 16:09:19 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:35.883 16:09:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.883 00:05:35.883 real 0m1.357s 00:05:35.883 user 0m1.193s 00:05:35.883 sys 0m0.074s 00:05:35.883 16:09:19 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.883 16:09:19 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:35.883 ************************************ 00:05:35.883 END TEST accel_dif_verify 00:05:35.883 ************************************ 00:05:35.883 16:09:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.883 16:09:19 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:35.883 16:09:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:35.883 16:09:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.883 16:09:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.883 ************************************ 00:05:35.883 START TEST accel_dif_generate 00:05:35.883 ************************************ 00:05:35.883 16:09:19 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.883 16:09:19 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:35.883 16:09:19 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:35.883 [2024-07-12 16:09:19.581743] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:35.883 [2024-07-12 16:09:19.581854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61617 ] 00:05:36.143 [2024-07-12 16:09:19.715857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.143 [2024-07-12 16:09:19.766038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.143 16:09:19 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:36.143 16:09:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:36.144 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:36.144 16:09:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:37.521 16:09:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.521 00:05:37.521 real 0m1.345s 
00:05:37.521 user 0m1.175s 00:05:37.521 sys 0m0.079s 00:05:37.521 16:09:20 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.521 16:09:20 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:37.521 ************************************ 00:05:37.521 END TEST accel_dif_generate 00:05:37.521 ************************************ 00:05:37.521 16:09:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.521 16:09:20 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:37.521 16:09:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:37.521 16:09:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.521 16:09:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.521 ************************************ 00:05:37.521 START TEST accel_dif_generate_copy 00:05:37.521 ************************************ 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:37.521 16:09:20 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:37.521 [2024-07-12 16:09:20.976238] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
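The run_test call at accel.sh@113 above hands "accel_test -t 1 -w dif_generate_copy" to the harness, and the accel.sh@12 trace shows the accel_perf command it expands to. A minimal way to rerun just this workload by hand — a sketch only, assuming the default software module is acceptable so the -c /dev/fd/62 JSON assembled by build_accel_config can be dropped — would be:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w dif_generate_copy   # 1-second run of the dif_generate_copy workload

The -t 1 and -w flags are exactly the ones visible in the trace; only the config-file plumbing is omitted here.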
00:05:37.521 [2024-07-12 16:09:20.976322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61651 ] 00:05:37.521 [2024-07-12 16:09:21.114567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.521 [2024-07-12 16:09:21.164769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.521 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 16:09:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.900 00:05:38.900 real 0m1.349s 00:05:38.900 user 0m1.184s 00:05:38.900 sys 0m0.074s 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.900 16:09:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:38.900 ************************************ 00:05:38.900 END TEST accel_dif_generate_copy 00:05:38.900 ************************************ 00:05:38.900 16:09:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.900 16:09:22 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:38.900 16:09:22 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.900 16:09:22 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:38.900 16:09:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.900 16:09:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.900 ************************************ 00:05:38.900 START TEST accel_comp 00:05:38.900 ************************************ 00:05:38.900 16:09:22 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:38.900 16:09:22 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:38.900 [2024-07-12 16:09:22.376802] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:38.900 [2024-07-12 16:09:22.376924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61686 ] 00:05:38.900 [2024-07-12 16:09:22.513994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.900 [2024-07-12 16:09:22.564636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.900 16:09:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.901 16:09:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:40.275 16:09:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.275 00:05:40.275 real 0m1.348s 00:05:40.275 user 0m1.188s 00:05:40.275 sys 0m0.069s 00:05:40.275 16:09:23 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.275 ************************************ 00:05:40.275 16:09:23 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:40.275 END TEST accel_comp 00:05:40.275 ************************************ 00:05:40.275 16:09:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.275 16:09:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.275 16:09:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:40.275 16:09:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.275 16:09:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.275 ************************************ 00:05:40.275 START TEST accel_decomp 00:05:40.275 ************************************ 00:05:40.275 16:09:23 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:40.275 [2024-07-12 16:09:23.778849] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:40.275 [2024-07-12 16:09:23.778950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61715 ] 00:05:40.275 [2024-07-12 16:09:23.916957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.275 [2024-07-12 16:09:23.966664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.275 16:09:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.533 16:09:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.533 16:09:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.533 16:09:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.533 16:09:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.533 16:09:24 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.533 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.534 16:09:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:41.468 16:09:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.468 00:05:41.468 real 0m1.353s 00:05:41.468 user 0m1.189s 00:05:41.468 sys 0m0.078s 00:05:41.468 16:09:25 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.468 ************************************ 00:05:41.468 END TEST accel_decomp 00:05:41.468 ************************************ 00:05:41.468 16:09:25 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:41.468 16:09:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.468 16:09:25 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:41.468 16:09:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:41.468 16:09:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.468 16:09:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.468 ************************************ 00:05:41.468 START TEST accel_decomp_full 00:05:41.468 ************************************ 00:05:41.468 16:09:25 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:41.468 16:09:25 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:41.468 [2024-07-12 16:09:25.185145] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:41.468 [2024-07-12 16:09:25.185241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61755 ] 00:05:41.727 [2024-07-12 16:09:25.316158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.727 [2024-07-12 16:09:25.366423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.727 16:09:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:43.101 16:09:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.101 00:05:43.101 real 0m1.358s 00:05:43.101 user 0m1.202s 00:05:43.101 sys 0m0.066s 00:05:43.101 16:09:26 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.101 ************************************ 00:05:43.101 END TEST accel_decomp_full 00:05:43.101 ************************************ 00:05:43.101 16:09:26 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:43.101 16:09:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.101 16:09:26 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:43.101 16:09:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:43.101 16:09:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.101 16:09:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.101 ************************************ 00:05:43.101 START TEST accel_decomp_mcore 00:05:43.101 ************************************ 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:43.101 [2024-07-12 16:09:26.594357] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:43.101 [2024-07-12 16:09:26.594464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61784 ] 00:05:43.101 [2024-07-12 16:09:26.731217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.101 [2024-07-12 16:09:26.784397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.101 [2024-07-12 16:09:26.784555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.101 [2024-07-12 16:09:26.785116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.101 [2024-07-12 16:09:26.785125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:43.101 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:43.102 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.360 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.361 16:09:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.298 00:05:44.298 real 0m1.368s 00:05:44.298 user 0m4.406s 00:05:44.298 sys 0m0.089s 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.298 16:09:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:44.298 ************************************ 00:05:44.298 END TEST accel_decomp_mcore 00:05:44.298 ************************************ 00:05:44.298 16:09:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.298 16:09:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:44.298 16:09:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:44.298 16:09:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.298 16:09:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.298 ************************************ 00:05:44.298 START TEST accel_decomp_full_mcore 00:05:44.298 ************************************ 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.298 16:09:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:44.298 16:09:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:44.298 [2024-07-12 16:09:28.015145] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:44.298 [2024-07-12 16:09:28.015245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61820 ] 00:05:44.558 [2024-07-12 16:09:28.152156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.558 [2024-07-12 16:09:28.208694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.558 [2024-07-12 16:09:28.208838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.558 [2024-07-12 16:09:28.208988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.558 [2024-07-12 16:09:28.209211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.558 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:44.559 16:09:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.559 16:09:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.937 00:05:45.937 real 0m1.376s 00:05:45.937 user 0m4.439s 00:05:45.937 sys 0m0.091s 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.937 16:09:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:45.937 ************************************ 00:05:45.937 END TEST accel_decomp_full_mcore 00:05:45.937 ************************************ 00:05:45.937 16:09:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.938 16:09:29 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:45.938 16:09:29 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:45.938 16:09:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.938 16:09:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.938 ************************************ 00:05:45.938 START TEST accel_decomp_mthread 00:05:45.938 ************************************ 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:45.938 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:45.938 [2024-07-12 16:09:29.444246] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:45.938 [2024-07-12 16:09:29.444310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61859 ] 00:05:45.938 [2024-07-12 16:09:29.577448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.938 [2024-07-12 16:09:29.635483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.196 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.197 16:09:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.132 00:05:47.132 real 0m1.364s 00:05:47.132 user 0m1.205s 00:05:47.132 sys 0m0.067s 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.132 16:09:30 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:47.132 ************************************ 00:05:47.132 END TEST accel_decomp_mthread 00:05:47.132 ************************************ 00:05:47.132 16:09:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.133 16:09:30 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.133 16:09:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:47.133 16:09:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.133 16:09:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.133 ************************************ 00:05:47.133 START 
TEST accel_decomp_full_mthread 00:05:47.133 ************************************ 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:47.133 16:09:30 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:47.393 [2024-07-12 16:09:30.864108] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:47.393 [2024-07-12 16:09:30.864889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61888 ] 00:05:47.393 [2024-07-12 16:09:31.002796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.393 [2024-07-12 16:09:31.056636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:47.393 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.394 16:09:31 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.394 16:09:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.815 00:05:48.815 real 0m1.381s 00:05:48.815 user 0m1.224s 00:05:48.815 sys 0m0.066s 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.815 ************************************ 00:05:48.815 END TEST accel_decomp_full_mthread 00:05:48.815 ************************************ 00:05:48.815 16:09:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
00:05:48.815 16:09:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.815 16:09:32 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:48.815 16:09:32 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:48.815 16:09:32 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:48.815 16:09:32 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:48.815 16:09:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.815 16:09:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.815 16:09:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.815 16:09:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.815 16:09:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.815 16:09:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.815 16:09:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.815 16:09:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:48.815 16:09:32 accel -- accel/accel.sh@41 -- # jq -r . 00:05:48.815 ************************************ 00:05:48.815 START TEST accel_dif_functional_tests 00:05:48.815 ************************************ 00:05:48.815 16:09:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:48.815 [2024-07-12 16:09:32.312927] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:48.815 [2024-07-12 16:09:32.313625] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61929 ] 00:05:48.815 [2024-07-12 16:09:32.451080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.815 [2024-07-12 16:09:32.508004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.815 [2024-07-12 16:09:32.508054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.815 [2024-07-12 16:09:32.508057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.815 [2024-07-12 16:09:32.538070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.074 00:05:49.074 00:05:49.074 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.074 http://cunit.sourceforge.net/ 00:05:49.074 00:05:49.074 00:05:49.074 Suite: accel_dif 00:05:49.074 Test: verify: DIF generated, GUARD check ...passed 00:05:49.074 Test: verify: DIF generated, APPTAG check ...passed 00:05:49.074 Test: verify: DIF generated, REFTAG check ...passed 00:05:49.074 Test: verify: DIF not generated, GUARD check ...[2024-07-12 16:09:32.559392] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:49.074 passed 00:05:49.074 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 16:09:32.559783] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:49.074 passed 00:05:49.074 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 16:09:32.560115] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:49.074 passed 00:05:49.074 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:49.074 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 16:09:32.560651] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:49.074 passed 00:05:49.074 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:49.074 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:49.074 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:49.074 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 16:09:32.561510] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:49.074 passed 00:05:49.074 Test: verify copy: DIF generated, GUARD check ...passed 00:05:49.074 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:49.074 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:49.074 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 16:09:32.562454] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:49.074 passed 00:05:49.074 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 16:09:32.562777] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:49.074 passed 00:05:49.074 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 16:09:32.563126] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:49.074 passed 00:05:49.074 Test: generate copy: DIF generated, GUARD check ...passed 00:05:49.074 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:49.074 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:49.074 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:49.074 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:49.074 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:49.074 Test: generate copy: iovecs-len validate ...passed 00:05:49.074 Test: generate copy: buffer alignment validate ...[2024-07-12 16:09:32.563904] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:49.074 passed 00:05:49.074 00:05:49.074 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.074 suites 1 1 n/a 0 0 00:05:49.074 tests 26 26 26 0 0 00:05:49.074 asserts 115 115 115 0 n/a 00:05:49.074 00:05:49.074 Elapsed time = 0.007 seconds 00:05:49.074 00:05:49.074 real 0m0.477s 00:05:49.074 user 0m0.539s 00:05:49.074 sys 0m0.101s 00:05:49.074 ************************************ 00:05:49.074 END TEST accel_dif_functional_tests 00:05:49.074 ************************************ 00:05:49.074 16:09:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.074 16:09:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:49.074 16:09:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.074 00:05:49.074 real 0m30.540s 00:05:49.074 user 0m32.657s 00:05:49.074 sys 0m2.793s 00:05:49.074 16:09:32 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.074 16:09:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.074 ************************************ 00:05:49.074 END TEST accel 00:05:49.074 ************************************ 00:05:49.335 16:09:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.335 16:09:32 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:49.335 16:09:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.335 16:09:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.335 16:09:32 -- common/autotest_common.sh@10 -- # set +x 00:05:49.335 ************************************ 00:05:49.335 START TEST accel_rpc 00:05:49.335 ************************************ 00:05:49.335 16:09:32 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:49.335 * Looking for test storage... 00:05:49.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:49.335 16:09:32 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.335 16:09:32 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61988 00:05:49.335 16:09:32 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:49.335 16:09:32 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 61988 00:05:49.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.335 16:09:32 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 61988 ']' 00:05:49.335 16:09:32 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.335 16:09:32 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.335 16:09:32 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.335 16:09:32 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.335 16:09:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.335 [2024-07-12 16:09:32.982297] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:49.335 [2024-07-12 16:09:32.982403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61988 ] 00:05:49.601 [2024-07-12 16:09:33.115277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.601 [2024-07-12 16:09:33.169102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.601 16:09:33 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.601 16:09:33 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:49.601 16:09:33 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:49.601 16:09:33 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:49.601 16:09:33 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:49.601 16:09:33 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:49.601 16:09:33 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:49.601 16:09:33 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.601 16:09:33 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.601 16:09:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.601 ************************************ 00:05:49.601 START TEST accel_assign_opcode 00:05:49.601 ************************************ 00:05:49.601 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.602 [2024-07-12 16:09:33.213580] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.602 [2024-07-12 16:09:33.221559] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.602 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.602 [2024-07-12 16:09:33.260278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.861 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.861 16:09:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:49.861 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.861 
16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 16:09:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:49.861 16:09:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:49.861 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.861 software 00:05:49.861 00:05:49.861 real 0m0.203s 00:05:49.861 user 0m0.054s 00:05:49.861 sys 0m0.010s 00:05:49.861 ************************************ 00:05:49.861 END TEST accel_assign_opcode 00:05:49.861 ************************************ 00:05:49.861 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.861 16:09:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.861 16:09:33 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 61988 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 61988 ']' 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 61988 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61988 00:05:49.861 killing process with pid 61988 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61988' 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@967 -- # kill 61988 00:05:49.861 16:09:33 accel_rpc -- common/autotest_common.sh@972 -- # wait 61988 00:05:50.120 00:05:50.120 real 0m0.874s 00:05:50.120 user 0m0.883s 00:05:50.120 sys 0m0.288s 00:05:50.120 16:09:33 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.120 ************************************ 00:05:50.120 END TEST accel_rpc 00:05:50.120 ************************************ 00:05:50.120 16:09:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.120 16:09:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.120 16:09:33 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:50.120 16:09:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.120 16:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.120 16:09:33 -- common/autotest_common.sh@10 -- # set +x 00:05:50.120 ************************************ 00:05:50.120 START TEST app_cmdline 00:05:50.120 ************************************ 00:05:50.120 16:09:33 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:50.120 * Looking for test storage... 00:05:50.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:50.120 16:09:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:50.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.120 16:09:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62068 00:05:50.120 16:09:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62068 00:05:50.120 16:09:33 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:50.120 16:09:33 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62068 ']' 00:05:50.120 16:09:33 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.120 16:09:33 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.120 16:09:33 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.120 16:09:33 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.120 16:09:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.379 [2024-07-12 16:09:33.900546] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:50.379 [2024-07-12 16:09:33.900646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62068 ] 00:05:50.379 [2024-07-12 16:09:34.037664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.379 [2024-07-12 16:09:34.088065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.638 [2024-07-12 16:09:34.114899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.638 16:09:34 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.638 16:09:34 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:50.638 16:09:34 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:50.897 { 00:05:50.897 "version": "SPDK v24.09-pre git sha1 182dd7de4", 00:05:50.897 "fields": { 00:05:50.897 "major": 24, 00:05:50.897 "minor": 9, 00:05:50.897 "patch": 0, 00:05:50.897 "suffix": "-pre", 00:05:50.897 "commit": "182dd7de4" 00:05:50.897 } 00:05:50.897 } 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:50.897 16:09:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:50.897 16:09:34 app_cmdline -- 
common/autotest_common.sh@648 -- # local es=0 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:50.897 16:09:34 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.157 request: 00:05:51.157 { 00:05:51.157 "method": "env_dpdk_get_mem_stats", 00:05:51.157 "req_id": 1 00:05:51.157 } 00:05:51.157 Got JSON-RPC error response 00:05:51.157 response: 00:05:51.157 { 00:05:51.157 "code": -32601, 00:05:51.157 "message": "Method not found" 00:05:51.157 } 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.157 16:09:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62068 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62068 ']' 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62068 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62068 00:05:51.157 killing process with pid 62068 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62068' 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@967 -- # kill 62068 00:05:51.157 16:09:34 app_cmdline -- common/autotest_common.sh@972 -- # wait 62068 00:05:51.415 ************************************ 00:05:51.415 END TEST app_cmdline 00:05:51.415 ************************************ 00:05:51.415 00:05:51.415 real 0m1.270s 00:05:51.415 user 0m1.700s 00:05:51.415 sys 0m0.288s 00:05:51.415 16:09:35 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.415 16:09:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.415 16:09:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.415 16:09:35 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:51.415 16:09:35 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.415 16:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.415 16:09:35 -- common/autotest_common.sh@10 -- # set +x 00:05:51.415 ************************************ 00:05:51.415 START TEST version 00:05:51.415 ************************************ 00:05:51.415 16:09:35 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:51.674 * Looking for test storage... 00:05:51.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:51.674 16:09:35 version -- app/version.sh@17 -- # get_header_version major 00:05:51.674 16:09:35 version -- app/version.sh@14 -- # cut -f2 00:05:51.674 16:09:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:51.674 16:09:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.674 16:09:35 version -- app/version.sh@17 -- # major=24 00:05:51.674 16:09:35 version -- app/version.sh@18 -- # get_header_version minor 00:05:51.674 16:09:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:51.674 16:09:35 version -- app/version.sh@14 -- # cut -f2 00:05:51.674 16:09:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.675 16:09:35 version -- app/version.sh@18 -- # minor=9 00:05:51.675 16:09:35 version -- app/version.sh@19 -- # get_header_version patch 00:05:51.675 16:09:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:51.675 16:09:35 version -- app/version.sh@14 -- # cut -f2 00:05:51.675 16:09:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.675 16:09:35 version -- app/version.sh@19 -- # patch=0 00:05:51.675 16:09:35 version -- app/version.sh@20 -- # get_header_version suffix 00:05:51.675 16:09:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:51.675 16:09:35 version -- app/version.sh@14 -- # cut -f2 00:05:51.675 16:09:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.675 16:09:35 version -- app/version.sh@20 -- # suffix=-pre 00:05:51.675 16:09:35 version -- app/version.sh@22 -- # version=24.9 00:05:51.675 16:09:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:51.675 16:09:35 version -- app/version.sh@28 -- # version=24.9rc0 00:05:51.675 16:09:35 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:51.675 16:09:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:51.675 16:09:35 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:51.675 16:09:35 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:51.675 00:05:51.675 real 0m0.148s 00:05:51.675 user 0m0.077s 00:05:51.675 sys 0m0.102s 00:05:51.675 ************************************ 00:05:51.675 END TEST version 00:05:51.675 ************************************ 00:05:51.675 16:09:35 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.675 16:09:35 version -- common/autotest_common.sh@10 -- # set +x 00:05:51.675 16:09:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.675 16:09:35 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:05:51.675 16:09:35 -- spdk/autotest.sh@198 -- # uname -s 00:05:51.675 16:09:35 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:51.675 16:09:35 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:51.675 16:09:35 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:05:51.675 16:09:35 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:05:51.675 16:09:35 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:51.675 16:09:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.675 16:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.675 16:09:35 -- common/autotest_common.sh@10 -- # set +x 00:05:51.675 ************************************ 00:05:51.675 START TEST spdk_dd 00:05:51.675 ************************************ 00:05:51.675 16:09:35 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:51.675 * Looking for test storage... 00:05:51.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:51.675 16:09:35 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.675 16:09:35 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.675 16:09:35 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.675 16:09:35 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.675 16:09:35 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.675 16:09:35 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.675 16:09:35 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.675 16:09:35 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:51.675 16:09:35 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.675 16:09:35 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:52.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:52.244 
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:52.244 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:52.244 16:09:35 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:52.244 16:09:35 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@230 -- # local class 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@232 -- # local progif 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@233 -- # class=01 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:52.244 16:09:35 
spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:05:52.244 16:09:35 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:52.244 16:09:35 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@139 -- # local lib so 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:05:52.244 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- 
# [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 
-- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:52.245 * spdk_dd linked to liburing 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:52.245 16:09:35 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:52.245 16:09:35 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:52.246 
16:09:35 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:52.246 16:09:35 spdk_dd -- 
common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:52.246 16:09:35 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:05:52.246 16:09:35 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:52.246 16:09:35 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:05:52.246 16:09:35 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:05:52.246 16:09:35 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:05:52.246 16:09:35 spdk_dd -- dd/common.sh@157 -- # return 0 00:05:52.246 16:09:35 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:52.246 16:09:35 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:52.246 16:09:35 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:52.246 16:09:35 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.246 16:09:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:52.246 ************************************ 00:05:52.246 START TEST spdk_dd_basic_rw 00:05:52.246 ************************************ 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:52.246 * Looking for test storage... 
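The dd/common.sh trace above (lines 142-157) is the liburing detection step: every shared object spdk_dd links against is matched against liburing.so.*, the hit on liburing.so.2 produces the "spdk_dd linked to liburing" message, build_config.sh confirms CONFIG_URING=y, and liburing_in_use=1 is exported before dd.sh line 15 decides whether the uring-only guard applies. A minimal sketch of that pattern, assuming the library list comes from ldd output (the producer of the list is not visible in this excerpt):

# sketch: decide whether spdk_dd links liburing (assumption: library list from ldd)
liburing_in_use=0
while read -r lib _ so _; do
  if [[ $lib == liburing.so.* ]]; then
    printf '* spdk_dd linked to liburing\n'
    liburing_in_use=1
  fi
done < <(ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
export liburing_in_use   # dd.sh then gates on (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))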
00:05:52.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.246 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:52.247 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:52.247 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:52.247 16:09:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:52.508 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:52.508 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.509 ************************************ 00:05:52.509 START TEST dd_bs_lt_native_bs 00:05:52.509 ************************************ 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:52.509 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:52.509 { 00:05:52.509 "subsystems": [ 00:05:52.509 { 00:05:52.509 "subsystem": "bdev", 00:05:52.509 "config": [ 00:05:52.509 { 00:05:52.509 "params": { 00:05:52.509 "trtype": "pcie", 00:05:52.509 "traddr": "0000:00:10.0", 00:05:52.509 "name": "Nvme0" 00:05:52.509 }, 00:05:52.509 "method": "bdev_nvme_attach_controller" 00:05:52.509 }, 00:05:52.509 { 00:05:52.509 "method": "bdev_wait_for_examine" 00:05:52.509 } 00:05:52.509 ] 00:05:52.509 } 00:05:52.509 ] 00:05:52.509 } 00:05:52.509 [2024-07-12 16:09:36.209447] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
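The JSON block printed just above is the bdev configuration that dd_bs_lt_native_bs hands to spdk_dd on /dev/fd/61 while /dev/fd/62 carries the input data; the test requests --bs=2048 against a namespace whose native block size was parsed as 4096 from the identify output earlier, so spdk_dd is expected to reject the copy (the error and the non-zero exit handling follow below). A stand-alone sketch of the same negative test, writing the config to a temporary file rather than streaming it over a file descriptor (the temp-file plumbing and the 8 KiB input size are illustrative assumptions; the flags and the JSON are taken from the trace):

# sketch: a bs smaller than the native block size must be rejected by spdk_dd
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
head -c 8192 /dev/urandom > /tmp/dd.in
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
     --if=/tmp/dd.in --ob=Nvme0n1 --bs=2048 --json "$conf"; then
  echo "unexpected: bs=2048 accepted despite a 4096-byte native block size"
else
  echo "expected failure: --bs below the native block size"
fi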
00:05:52.509 [2024-07-12 16:09:36.209554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62375 ] 00:05:52.768 [2024-07-12 16:09:36.349343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.768 [2024-07-12 16:09:36.421620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.768 [2024-07-12 16:09:36.454373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:53.027 [2024-07-12 16:09:36.548530] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:53.027 [2024-07-12 16:09:36.548623] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.027 [2024-07-12 16:09:36.626073] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:53.027 ************************************ 00:05:53.027 END TEST dd_bs_lt_native_bs 00:05:53.027 ************************************ 00:05:53.027 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:05:53.027 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.027 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:05:53.027 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:05:53.027 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:05:53.027 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.027 00:05:53.027 real 0m0.572s 00:05:53.027 user 0m0.408s 00:05:53.027 sys 0m0.114s 00:05:53.027 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.027 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.286 ************************************ 00:05:53.286 START TEST dd_rw 00:05:53.286 ************************************ 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:53.286 16:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.855 16:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:53.855 16:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:53.855 16:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.855 16:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.855 [2024-07-12 16:09:37.429344] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:05:53.855 [2024-07-12 16:09:37.429441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62412 ] 00:05:53.855 { 00:05:53.855 "subsystems": [ 00:05:53.855 { 00:05:53.855 "subsystem": "bdev", 00:05:53.855 "config": [ 00:05:53.855 { 00:05:53.855 "params": { 00:05:53.855 "trtype": "pcie", 00:05:53.855 "traddr": "0000:00:10.0", 00:05:53.855 "name": "Nvme0" 00:05:53.855 }, 00:05:53.855 "method": "bdev_nvme_attach_controller" 00:05:53.855 }, 00:05:53.855 { 00:05:53.855 "method": "bdev_wait_for_examine" 00:05:53.855 } 00:05:53.855 ] 00:05:53.855 } 00:05:53.855 ] 00:05:53.855 } 00:05:53.855 [2024-07-12 16:09:37.563062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.114 [2024-07-12 16:09:37.623364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.114 [2024-07-12 16:09:37.653563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.373  Copying: 60/60 [kB] (average 29 MBps) 00:05:54.373 00:05:54.373 16:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:54.373 16:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:54.373 16:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.373 16:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.373 [2024-07-12 16:09:37.942161] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:54.373 [2024-07-12 16:09:37.942270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62427 ] 00:05:54.373 { 00:05:54.373 "subsystems": [ 00:05:54.373 { 00:05:54.373 "subsystem": "bdev", 00:05:54.373 "config": [ 00:05:54.373 { 00:05:54.373 "params": { 00:05:54.373 "trtype": "pcie", 00:05:54.373 "traddr": "0000:00:10.0", 00:05:54.373 "name": "Nvme0" 00:05:54.373 }, 00:05:54.373 "method": "bdev_nvme_attach_controller" 00:05:54.373 }, 00:05:54.373 { 00:05:54.373 "method": "bdev_wait_for_examine" 00:05:54.373 } 00:05:54.373 ] 00:05:54.373 } 00:05:54.373 ] 00:05:54.373 } 00:05:54.373 [2024-07-12 16:09:38.081807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.632 [2024-07-12 16:09:38.134355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.632 [2024-07-12 16:09:38.160426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.891  Copying: 60/60 [kB] (average 14 MBps) 00:05:54.891 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.891 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.891 [2024-07-12 16:09:38.457478] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:54.891 [2024-07-12 16:09:38.457568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62443 ] 00:05:54.891 { 00:05:54.891 "subsystems": [ 00:05:54.891 { 00:05:54.891 "subsystem": "bdev", 00:05:54.891 "config": [ 00:05:54.891 { 00:05:54.891 "params": { 00:05:54.891 "trtype": "pcie", 00:05:54.891 "traddr": "0000:00:10.0", 00:05:54.891 "name": "Nvme0" 00:05:54.891 }, 00:05:54.891 "method": "bdev_nvme_attach_controller" 00:05:54.891 }, 00:05:54.891 { 00:05:54.891 "method": "bdev_wait_for_examine" 00:05:54.891 } 00:05:54.891 ] 00:05:54.891 } 00:05:54.891 ] 00:05:54.891 } 00:05:54.891 [2024-07-12 16:09:38.595788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.150 [2024-07-12 16:09:38.653194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.150 [2024-07-12 16:09:38.679786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.409  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:55.409 00:05:55.409 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:55.409 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:55.409 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:55.409 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:55.409 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:55.409 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:55.409 16:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.977 16:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:55.977 16:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:55.977 16:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.977 16:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.977 [2024-07-12 16:09:39.502383] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:55.977 [2024-07-12 16:09:39.502939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62462 ] 00:05:55.977 { 00:05:55.977 "subsystems": [ 00:05:55.977 { 00:05:55.977 "subsystem": "bdev", 00:05:55.977 "config": [ 00:05:55.977 { 00:05:55.977 "params": { 00:05:55.977 "trtype": "pcie", 00:05:55.977 "traddr": "0000:00:10.0", 00:05:55.977 "name": "Nvme0" 00:05:55.977 }, 00:05:55.977 "method": "bdev_nvme_attach_controller" 00:05:55.977 }, 00:05:55.977 { 00:05:55.977 "method": "bdev_wait_for_examine" 00:05:55.977 } 00:05:55.977 ] 00:05:55.977 } 00:05:55.977 ] 00:05:55.977 } 00:05:55.977 [2024-07-12 16:09:39.638461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.977 [2024-07-12 16:09:39.687307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.236 [2024-07-12 16:09:39.715151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.236  Copying: 60/60 [kB] (average 58 MBps) 00:05:56.236 00:05:56.236 16:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:56.236 16:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:56.236 16:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.236 16:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.494 { 00:05:56.494 "subsystems": [ 00:05:56.494 { 00:05:56.494 "subsystem": "bdev", 00:05:56.494 "config": [ 00:05:56.494 { 00:05:56.494 "params": { 00:05:56.494 "trtype": "pcie", 00:05:56.494 "traddr": "0000:00:10.0", 00:05:56.494 "name": "Nvme0" 00:05:56.494 }, 00:05:56.494 "method": "bdev_nvme_attach_controller" 00:05:56.494 }, 00:05:56.494 { 00:05:56.494 "method": "bdev_wait_for_examine" 00:05:56.494 } 00:05:56.494 ] 00:05:56.494 } 00:05:56.494 ] 00:05:56.494 } 00:05:56.494 [2024-07-12 16:09:40.000499] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:56.494 [2024-07-12 16:09:40.000593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62476 ] 00:05:56.494 [2024-07-12 16:09:40.137993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.494 [2024-07-12 16:09:40.188667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.494 [2024-07-12 16:09:40.215723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.752  Copying: 60/60 [kB] (average 58 MBps) 00:05:56.752 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.752 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.011 [2024-07-12 16:09:40.502408] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:57.011 [2024-07-12 16:09:40.502934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62492 ] 00:05:57.011 { 00:05:57.011 "subsystems": [ 00:05:57.011 { 00:05:57.011 "subsystem": "bdev", 00:05:57.011 "config": [ 00:05:57.011 { 00:05:57.011 "params": { 00:05:57.011 "trtype": "pcie", 00:05:57.011 "traddr": "0000:00:10.0", 00:05:57.011 "name": "Nvme0" 00:05:57.011 }, 00:05:57.011 "method": "bdev_nvme_attach_controller" 00:05:57.011 }, 00:05:57.011 { 00:05:57.011 "method": "bdev_wait_for_examine" 00:05:57.011 } 00:05:57.011 ] 00:05:57.011 } 00:05:57.011 ] 00:05:57.011 } 00:05:57.011 [2024-07-12 16:09:40.640199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.011 [2024-07-12 16:09:40.687971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.011 [2024-07-12 16:09:40.714362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.270  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:57.270 00:05:57.270 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:57.270 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:57.270 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:57.270 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:57.270 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:57.270 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:57.270 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:57.270 16:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.837 16:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:57.837 16:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:57.837 16:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.837 16:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.837 [2024-07-12 16:09:41.538396] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:57.837 [2024-07-12 16:09:41.538485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62511 ] 00:05:57.837 { 00:05:57.837 "subsystems": [ 00:05:57.837 { 00:05:57.837 "subsystem": "bdev", 00:05:57.837 "config": [ 00:05:57.837 { 00:05:57.837 "params": { 00:05:57.838 "trtype": "pcie", 00:05:57.838 "traddr": "0000:00:10.0", 00:05:57.838 "name": "Nvme0" 00:05:57.838 }, 00:05:57.838 "method": "bdev_nvme_attach_controller" 00:05:57.838 }, 00:05:57.838 { 00:05:57.838 "method": "bdev_wait_for_examine" 00:05:57.838 } 00:05:57.838 ] 00:05:57.838 } 00:05:57.838 ] 00:05:57.838 } 00:05:58.097 [2024-07-12 16:09:41.675760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.097 [2024-07-12 16:09:41.726117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.097 [2024-07-12 16:09:41.752027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.355  Copying: 56/56 [kB] (average 54 MBps) 00:05:58.355 00:05:58.355 16:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:58.355 16:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:58.355 16:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:58.355 16:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.355 [2024-07-12 16:09:42.026615] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:58.355 [2024-07-12 16:09:42.026730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62524 ] 00:05:58.355 { 00:05:58.355 "subsystems": [ 00:05:58.355 { 00:05:58.355 "subsystem": "bdev", 00:05:58.355 "config": [ 00:05:58.355 { 00:05:58.355 "params": { 00:05:58.355 "trtype": "pcie", 00:05:58.355 "traddr": "0000:00:10.0", 00:05:58.355 "name": "Nvme0" 00:05:58.355 }, 00:05:58.355 "method": "bdev_nvme_attach_controller" 00:05:58.355 }, 00:05:58.355 { 00:05:58.355 "method": "bdev_wait_for_examine" 00:05:58.355 } 00:05:58.355 ] 00:05:58.355 } 00:05:58.355 ] 00:05:58.355 } 00:05:58.613 [2024-07-12 16:09:42.163214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.613 [2024-07-12 16:09:42.213145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.613 [2024-07-12 16:09:42.239908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.872  Copying: 56/56 [kB] (average 27 MBps) 00:05:58.872 00:05:58.872 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.872 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:58.872 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:58.872 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:58.872 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:58.872 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:58.872 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:58.873 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:58.873 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:58.873 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:58.873 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.873 [2024-07-12 16:09:42.517750] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:58.873 [2024-07-12 16:09:42.517878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62540 ] 00:05:58.873 { 00:05:58.873 "subsystems": [ 00:05:58.873 { 00:05:58.873 "subsystem": "bdev", 00:05:58.873 "config": [ 00:05:58.873 { 00:05:58.873 "params": { 00:05:58.873 "trtype": "pcie", 00:05:58.873 "traddr": "0000:00:10.0", 00:05:58.873 "name": "Nvme0" 00:05:58.873 }, 00:05:58.873 "method": "bdev_nvme_attach_controller" 00:05:58.873 }, 00:05:58.873 { 00:05:58.873 "method": "bdev_wait_for_examine" 00:05:58.873 } 00:05:58.873 ] 00:05:58.873 } 00:05:58.873 ] 00:05:58.873 } 00:05:59.132 [2024-07-12 16:09:42.654356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.132 [2024-07-12 16:09:42.710624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.132 [2024-07-12 16:09:42.737945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:59.389  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:59.389 00:05:59.389 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:59.389 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:59.389 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:59.389 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:59.389 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:59.389 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:59.389 16:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.954 16:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:59.954 16:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:59.955 16:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.955 16:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.955 [2024-07-12 16:09:43.538758] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:05:59.955 [2024-07-12 16:09:43.538896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62559 ] 00:05:59.955 { 00:05:59.955 "subsystems": [ 00:05:59.955 { 00:05:59.955 "subsystem": "bdev", 00:05:59.955 "config": [ 00:05:59.955 { 00:05:59.955 "params": { 00:05:59.955 "trtype": "pcie", 00:05:59.955 "traddr": "0000:00:10.0", 00:05:59.955 "name": "Nvme0" 00:05:59.955 }, 00:05:59.955 "method": "bdev_nvme_attach_controller" 00:05:59.955 }, 00:05:59.955 { 00:05:59.955 "method": "bdev_wait_for_examine" 00:05:59.955 } 00:05:59.955 ] 00:05:59.955 } 00:05:59.955 ] 00:05:59.955 } 00:05:59.955 [2024-07-12 16:09:43.674997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.218 [2024-07-12 16:09:43.728193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.218 [2024-07-12 16:09:43.757716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.497  Copying: 56/56 [kB] (average 54 MBps) 00:06:00.497 00:06:00.497 16:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:00.497 16:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:00.497 16:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.497 16:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.497 [2024-07-12 16:09:44.034467] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:00.497 [2024-07-12 16:09:44.034566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62572 ] 00:06:00.497 { 00:06:00.497 "subsystems": [ 00:06:00.497 { 00:06:00.497 "subsystem": "bdev", 00:06:00.497 "config": [ 00:06:00.497 { 00:06:00.497 "params": { 00:06:00.497 "trtype": "pcie", 00:06:00.497 "traddr": "0000:00:10.0", 00:06:00.497 "name": "Nvme0" 00:06:00.497 }, 00:06:00.497 "method": "bdev_nvme_attach_controller" 00:06:00.497 }, 00:06:00.497 { 00:06:00.497 "method": "bdev_wait_for_examine" 00:06:00.497 } 00:06:00.497 ] 00:06:00.497 } 00:06:00.497 ] 00:06:00.497 } 00:06:00.497 [2024-07-12 16:09:44.171203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.777 [2024-07-12 16:09:44.232698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.777 [2024-07-12 16:09:44.263609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.777  Copying: 56/56 [kB] (average 54 MBps) 00:06:00.777 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.777 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.036 { 00:06:01.036 "subsystems": [ 00:06:01.036 { 00:06:01.036 "subsystem": "bdev", 00:06:01.036 "config": [ 00:06:01.036 { 00:06:01.036 "params": { 00:06:01.036 "trtype": "pcie", 00:06:01.036 "traddr": "0000:00:10.0", 00:06:01.036 "name": "Nvme0" 00:06:01.036 }, 00:06:01.036 "method": "bdev_nvme_attach_controller" 00:06:01.036 }, 00:06:01.036 { 00:06:01.036 "method": "bdev_wait_for_examine" 00:06:01.036 } 00:06:01.036 ] 00:06:01.036 } 00:06:01.036 ] 00:06:01.036 } 00:06:01.036 [2024-07-12 16:09:44.553605] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:01.036 [2024-07-12 16:09:44.553697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62588 ] 00:06:01.036 [2024-07-12 16:09:44.690006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.036 [2024-07-12 16:09:44.736846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.295 [2024-07-12 16:09:44.763338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.295  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:01.295 00:06:01.295 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:01.295 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:01.295 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:01.295 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:01.295 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:01.295 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:01.295 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:01.295 16:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.863 16:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:01.863 16:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:01.863 16:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:01.863 16:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.863 { 00:06:01.863 "subsystems": [ 00:06:01.863 { 00:06:01.863 "subsystem": "bdev", 00:06:01.863 "config": [ 00:06:01.863 { 00:06:01.863 "params": { 00:06:01.863 "trtype": "pcie", 00:06:01.863 "traddr": "0000:00:10.0", 00:06:01.863 "name": "Nvme0" 00:06:01.863 }, 00:06:01.863 "method": "bdev_nvme_attach_controller" 00:06:01.863 }, 00:06:01.863 { 00:06:01.863 "method": "bdev_wait_for_examine" 00:06:01.863 } 00:06:01.863 ] 00:06:01.863 } 00:06:01.863 ] 00:06:01.863 } 00:06:01.863 [2024-07-12 16:09:45.509762] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:01.863 [2024-07-12 16:09:45.509919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62601 ] 00:06:02.123 [2024-07-12 16:09:45.647806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.123 [2024-07-12 16:09:45.697504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.123 [2024-07-12 16:09:45.723166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.382  Copying: 48/48 [kB] (average 46 MBps) 00:06:02.382 00:06:02.382 16:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:02.382 16:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:02.382 16:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.382 16:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.382 { 00:06:02.382 "subsystems": [ 00:06:02.382 { 00:06:02.382 "subsystem": "bdev", 00:06:02.382 "config": [ 00:06:02.382 { 00:06:02.382 "params": { 00:06:02.382 "trtype": "pcie", 00:06:02.382 "traddr": "0000:00:10.0", 00:06:02.382 "name": "Nvme0" 00:06:02.382 }, 00:06:02.382 "method": "bdev_nvme_attach_controller" 00:06:02.382 }, 00:06:02.382 { 00:06:02.382 "method": "bdev_wait_for_examine" 00:06:02.382 } 00:06:02.382 ] 00:06:02.382 } 00:06:02.382 ] 00:06:02.382 } 00:06:02.382 [2024-07-12 16:09:46.013661] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:02.382 [2024-07-12 16:09:46.013748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62622 ] 00:06:02.641 [2024-07-12 16:09:46.150055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.641 [2024-07-12 16:09:46.198199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.642 [2024-07-12 16:09:46.224464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.901  Copying: 48/48 [kB] (average 46 MBps) 00:06:02.901 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.901 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.901 [2024-07-12 16:09:46.513817] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:02.901 [2024-07-12 16:09:46.513965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62632 ] 00:06:02.901 { 00:06:02.901 "subsystems": [ 00:06:02.901 { 00:06:02.901 "subsystem": "bdev", 00:06:02.901 "config": [ 00:06:02.901 { 00:06:02.902 "params": { 00:06:02.902 "trtype": "pcie", 00:06:02.902 "traddr": "0000:00:10.0", 00:06:02.902 "name": "Nvme0" 00:06:02.902 }, 00:06:02.902 "method": "bdev_nvme_attach_controller" 00:06:02.902 }, 00:06:02.902 { 00:06:02.902 "method": "bdev_wait_for_examine" 00:06:02.902 } 00:06:02.902 ] 00:06:02.902 } 00:06:02.902 ] 00:06:02.902 } 00:06:03.161 [2024-07-12 16:09:46.649791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.161 [2024-07-12 16:09:46.696996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.161 [2024-07-12 16:09:46.723182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.420  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:03.420 00:06:03.420 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:03.420 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:03.420 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:03.420 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:03.420 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:03.420 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:03.420 16:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.987 16:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:03.987 16:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:03.987 16:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.987 16:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.987 [2024-07-12 16:09:47.486958] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:03.987 [2024-07-12 16:09:47.487230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62651 ] 00:06:03.987 { 00:06:03.987 "subsystems": [ 00:06:03.987 { 00:06:03.987 "subsystem": "bdev", 00:06:03.987 "config": [ 00:06:03.987 { 00:06:03.987 "params": { 00:06:03.987 "trtype": "pcie", 00:06:03.987 "traddr": "0000:00:10.0", 00:06:03.987 "name": "Nvme0" 00:06:03.987 }, 00:06:03.987 "method": "bdev_nvme_attach_controller" 00:06:03.987 }, 00:06:03.987 { 00:06:03.987 "method": "bdev_wait_for_examine" 00:06:03.987 } 00:06:03.987 ] 00:06:03.987 } 00:06:03.987 ] 00:06:03.987 } 00:06:03.987 [2024-07-12 16:09:47.625855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.987 [2024-07-12 16:09:47.673607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.987 [2024-07-12 16:09:47.700044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.244  Copying: 48/48 [kB] (average 46 MBps) 00:06:04.244 00:06:04.244 16:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:04.244 16:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:04.244 16:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.244 16:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.502 [2024-07-12 16:09:47.972451] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:04.502 [2024-07-12 16:09:47.972539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62665 ] 00:06:04.502 { 00:06:04.502 "subsystems": [ 00:06:04.502 { 00:06:04.502 "subsystem": "bdev", 00:06:04.502 "config": [ 00:06:04.502 { 00:06:04.502 "params": { 00:06:04.502 "trtype": "pcie", 00:06:04.502 "traddr": "0000:00:10.0", 00:06:04.502 "name": "Nvme0" 00:06:04.502 }, 00:06:04.502 "method": "bdev_nvme_attach_controller" 00:06:04.502 }, 00:06:04.502 { 00:06:04.502 "method": "bdev_wait_for_examine" 00:06:04.502 } 00:06:04.502 ] 00:06:04.502 } 00:06:04.502 ] 00:06:04.502 } 00:06:04.502 [2024-07-12 16:09:48.110357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.502 [2024-07-12 16:09:48.158188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.502 [2024-07-12 16:09:48.184350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.760  Copying: 48/48 [kB] (average 46 MBps) 00:06:04.760 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.760 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.760 [2024-07-12 16:09:48.485309] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:04.760 [2024-07-12 16:09:48.486008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62680 ] 00:06:05.027 { 00:06:05.027 "subsystems": [ 00:06:05.027 { 00:06:05.027 "subsystem": "bdev", 00:06:05.027 "config": [ 00:06:05.027 { 00:06:05.027 "params": { 00:06:05.027 "trtype": "pcie", 00:06:05.027 "traddr": "0000:00:10.0", 00:06:05.027 "name": "Nvme0" 00:06:05.027 }, 00:06:05.027 "method": "bdev_nvme_attach_controller" 00:06:05.027 }, 00:06:05.027 { 00:06:05.027 "method": "bdev_wait_for_examine" 00:06:05.027 } 00:06:05.027 ] 00:06:05.027 } 00:06:05.027 ] 00:06:05.027 } 00:06:05.027 [2024-07-12 16:09:48.623534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.027 [2024-07-12 16:09:48.676248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.027 [2024-07-12 16:09:48.702991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.285  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:05.285 00:06:05.285 00:06:05.285 real 0m12.144s 00:06:05.285 user 0m9.200s 00:06:05.285 sys 0m3.436s 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.285 ************************************ 00:06:05.285 END TEST dd_rw 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.285 ************************************ 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.285 ************************************ 00:06:05.285 START TEST dd_rw_offset 00:06:05.285 ************************************ 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:05.285 16:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:05.544 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:05.544 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=dr77j884pda7xvdn5mkozgd1qktfec4fcws4ktje5fdk7ihgrjj8quzc1f2xjpra2237fgv3ls6ulwq3u5itl38ppxu6vcdzbmebfmer9tic4ryze7yfq6snfm31fu2u5sn9s6oebentwti0ixc9u4yugy7ulz74ohjowp2ksul3in2irjkp6lyiiy4rvjnyv12r6b2pa3nn9uooeaud5gfjo5tijhebjwhwzeoidkb2cqsp4sqr4wscu8oqcmx3gczr09kcndcp27dj6wyk2anieeyeo7zyhs5ivclc0dj9njhv68fu04shlfxzdg45plba6l1yceap2z2izs8z9l922ywufzkawq51owqc8028pqcgdi1umjfffaqiu1l0uznmjji23l4msm9ugzrnwtjxib8cyfqwediru7x6wnfsc8aknkm1uqdxwb18ejb4dnxhnwufdw3fop6nbkj48itd9c18aaex8w8ffcsr1cuqocu8oksnagzaevxelfd8912q85mtbynio757d0973y0zt5oxokaao4qp41omws5k1rmdhlk077yrf2pl2964hckqyxrl2q5orisy742pr33y7dwej8ml6hm6hssb2pmyt9si0u2rtknel5tirfhl5h97s94w2zo54ql1cam0sa1ws8qlcqjmcfqevu4nrbbetsi83so0vvfbvnlac2y5pqbdk1g5g4nncg9vvd5at1jbg38tqaw40zrdmrsm2lvw9zsfadu23omg1c4b5vcx1362c4l6v6jh5oon9s1hbrdc7r754rbopndbz8y1zjxe7ewy8jcnvmq5z2z7wtk3nb459utzokhp68o5l39tv46zv4x7kfpcbmx7bpxgtqtq7oa0nprwqiezsz8bj5w9ndhajduxqi94uej6uodcsj74ycemicdoda0wjsp2g2busy14s7qlii3z88mcxloo4bfnsnum94cukjf52joo2robvqjvm6rhuk2l580rcoj5sl427vhmsz0p8lwiofpy200k6g16e0oe9png1pcxxbyoxb4budmnpx6xgndpm9s00khheajgm1gvy4ak5nxextnsx4w1eiik04ljq64crq4jkd236o1xf4hd19oqus3adgxrdm85za4tbqkw8bqs6z3ec10784116bj3nc1zx3ymon4kevxjnyodb3iea02rwe3n5gq952awrdght0qymtemuppi0gblxu7c4j5bm2lxl2veh3dmd0jdueggn39u8waufyfvrqn1hftxmq594oyt36rt11v38yzrqza960to7gmd28w3qnv2xdwnqbfm4yulqzflsnhpv7jkjtsvqsa3yg6dmng3ojlezfyye5qwghpsen7d6fhin7vte2tt025ik7yklz9t5271xd0ozbayyo9m7kz0jkjxlsfffboidgemtzdhsno0kos3dd1dcvfzjmh0h4ciyni7qvubr40g45b6vwgiac93yc7qxnvi1wz7son0zqy5ghpefooib331tzp8z46i94vt4b55ecsmk1ri1cl08htqbszo2izsxtcr5iztuxqu26b4hirusior4rmaqlk63nzuwngrgbhmnokgwd2iyef633jgtrv89k053kshzaw8z57ixkdsf7nnp4vmsbu0xz8eku18fzc1tgc5bwv3yaqkzetek6ucyv91uy6k13s7vveyl0s86fx1xkm5dfjjs16tx7qcv1qm0vbobsw4xu3p6l8tpgi2jgn3szk0lp7bc2xcjeo5aah512cz0x6q0ndrzlmm3pnm2vyu5aiuzx1t694wbxdcywysptnvg35ekvxnzsy4xo6ric2r89etrjdp37w4jdur2amkxksvl5p4u9jrx9cpta9x1fa66u4au4a3gnu7z7lawgshpxnr0b2gkh3ld601vw1o7erxmul5y41aswuxatx5o7yuf6zb364rvg1gtdsx132ih28ef722ok5cs73t7au8w25wy1nv1dgkjntj6xkn2xr8rc2lajgub6tn3xcf99wxbtpzofqdwg4f80nlumdrp8bfmvwubuq4bm91gtotae52y4cg0bat6bruwmn4os6xw3h9t0xjwjloqsxh4357o6ajj77myhphnafa7n5rt2uiuvauk4twj1o1y7jd1nwelgvnzne9ia2al1vu5x4xpake8lbt1eyxlp9jq27o93bsihjexgb1h3qzmns7avcsr5uaie2z0tut2nergf8ldrre4j3a40mvdc4g2qe6d6723wii87m4k3noxfgoiqrylocm2gtpwb6sgscerfl69xtyddx6mkosd9f9zv5qppov0r480wdan78vecsa8zsb5oozmjkebuxkxjzpiku8gfn6oanfqytbg3a7f8v8gj0phaudtqo2y8tp5oag5u040lfovvzx9u9ee5upn8bq0my4ogjyp9zz3es1q76swxieqtjq4icahu2iy285mxm5r71b0c3s4o1s22przkccxu6zu5ps0t1t1he2jsik5lhhzdulxozshrb4cvxyzm5mvb4ivp178egt3wunjjc6qmveop5b5f2oskv31z6pfvi8qb9ty4zd9oq3vnaonyxo40nhr05ls0ap43wdtdsiz9md0ps2m0mpsz0ino3o1mjcutj13o5c5s4mrpaf7rwr38ijfalqg8am1xw8r85tde4sf7kiv507egbd9sfkrv5yayewer32hk2m0g6ms9y40jgaxgsgdsw98igkymhodkic0f2rkj69w2i8i8y8ao7b7mxgjlepsqs6bu8fogt3g63z0863kjea4fn3au3gkq1qcr33l4n15y933ffha9l5ejzkiu5vlvjb55tgwuf9dlukrop3yh7nig83mvble7i3gb5oudq6v1536bt6fgg04uiyuaod4xauq7jtwrtnd0ocnqddqfq3rsbnx852r0bm4ocb76722u5fxwln8bpoinvuspseavf1mm0p5a4e2ffiu379cyfc9c0w54g78z5ypwx6a7xvt4g5dmgvd9qt9kervqy1lnyh3o6g19fpzocv9z3x9ei1ymw5s4gegqh1b5mv5aog0rzd1zwoghcrmohd1n1x1bf3zbjvt2l75ecvl79a91793hb89egwro4xs9s9q6mzzwv9wwjfy86d74ejg62ko8byxjsbzww4p5y0nie45z3a1vwqb4aknh9hfiqd8334pehooepog83e7sczymopjza7qncnioblo9yipx198nxnczf5zymsc0z2y6mkyhc4jv0a22q7ca8lc669wugu0uvo3g2jthfpp1u41fnt3flk9zwobx7zeety22pzgsc5adpb1kc2eujcsp8h83c7tonlwpwurbty29fg2le5apv07f3750ivyhpc3fka84s1qfjql213wp01fyo1bcwv9b6ghwkupa75w45h0o4d6df1ot8brrf6bl909oz4rsvralpgiunv0lruc0gnc0vrjoz05dg50opm8ls73tp0hi4fakxff1obeix6kxxdr0417mcif9mro6
vurf89wiv705b5e798zdogkxppk713c4iaqfmgs6dahtxklxbo5fnct21bfxgm3eyojw07wirtbdff2tgijmfsm5cvydrxhx86s2ebaghi984uxqinscfe1b3tg9e05ydbpa4prynvj2za7qn7kedmzao71bafktgfht97vuxz677bvv4x91fjebyoz1rw3d9pil9mkw9b5tl31pps9zapve9mi9e68rfilaar0qzjeil6on6dpj9rm7zd6tk31zwxvzxglqb4dkba8hfl7ox6cpzv0pxwshwxej00js7i1eowj7z9ljcuquszgdkxytt6kukxhe7jve8d51s36tt1gl9rgxjdrepbzgfg3ois60u9s9b39yhs02yxx52jcod1toxi7ns67xzb7wjnt1qg93eevbavvnjlc31vqji3zsw07xu70cd3n9vjw4e91ra4buw2vtpi3e5n62g8bq720zze9chfj8r5xnwk81l8thkdj3g6g425lt7ynqeabacj35mf0c5hndf3ip5bde37oq1qgnq7s2pe 00:06:05.544 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:05.544 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:05.544 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:05.544 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:05.544 [2024-07-12 16:09:49.073561] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:05.544 [2024-07-12 16:09:49.073658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62715 ] 00:06:05.544 { 00:06:05.544 "subsystems": [ 00:06:05.544 { 00:06:05.544 "subsystem": "bdev", 00:06:05.544 "config": [ 00:06:05.544 { 00:06:05.544 "params": { 00:06:05.544 "trtype": "pcie", 00:06:05.544 "traddr": "0000:00:10.0", 00:06:05.544 "name": "Nvme0" 00:06:05.544 }, 00:06:05.544 "method": "bdev_nvme_attach_controller" 00:06:05.544 }, 00:06:05.544 { 00:06:05.544 "method": "bdev_wait_for_examine" 00:06:05.544 } 00:06:05.544 ] 00:06:05.544 } 00:06:05.544 ] 00:06:05.544 } 00:06:05.544 [2024-07-12 16:09:49.203236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.544 [2024-07-12 16:09:49.251337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.802 [2024-07-12 16:09:49.278386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.802  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:05.802 00:06:05.803 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:05.803 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:05.803 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:05.803 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:06.061 [2024-07-12 16:09:49.544728] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:06.061 [2024-07-12 16:09:49.544826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62724 ] 00:06:06.061 { 00:06:06.061 "subsystems": [ 00:06:06.061 { 00:06:06.061 "subsystem": "bdev", 00:06:06.061 "config": [ 00:06:06.061 { 00:06:06.061 "params": { 00:06:06.061 "trtype": "pcie", 00:06:06.061 "traddr": "0000:00:10.0", 00:06:06.061 "name": "Nvme0" 00:06:06.061 }, 00:06:06.061 "method": "bdev_nvme_attach_controller" 00:06:06.061 }, 00:06:06.061 { 00:06:06.061 "method": "bdev_wait_for_examine" 00:06:06.061 } 00:06:06.061 ] 00:06:06.061 } 00:06:06.061 ] 00:06:06.061 } 00:06:06.061 [2024-07-12 16:09:49.675012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.061 [2024-07-12 16:09:49.724531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.061 [2024-07-12 16:09:49.751338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.320  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:06.320 00:06:06.320 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:06.320 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ dr77j884pda7xvdn5mkozgd1qktfec4fcws4ktje5fdk7ihgrjj8quzc1f2xjpra2237fgv3ls6ulwq3u5itl38ppxu6vcdzbmebfmer9tic4ryze7yfq6snfm31fu2u5sn9s6oebentwti0ixc9u4yugy7ulz74ohjowp2ksul3in2irjkp6lyiiy4rvjnyv12r6b2pa3nn9uooeaud5gfjo5tijhebjwhwzeoidkb2cqsp4sqr4wscu8oqcmx3gczr09kcndcp27dj6wyk2anieeyeo7zyhs5ivclc0dj9njhv68fu04shlfxzdg45plba6l1yceap2z2izs8z9l922ywufzkawq51owqc8028pqcgdi1umjfffaqiu1l0uznmjji23l4msm9ugzrnwtjxib8cyfqwediru7x6wnfsc8aknkm1uqdxwb18ejb4dnxhnwufdw3fop6nbkj48itd9c18aaex8w8ffcsr1cuqocu8oksnagzaevxelfd8912q85mtbynio757d0973y0zt5oxokaao4qp41omws5k1rmdhlk077yrf2pl2964hckqyxrl2q5orisy742pr33y7dwej8ml6hm6hssb2pmyt9si0u2rtknel5tirfhl5h97s94w2zo54ql1cam0sa1ws8qlcqjmcfqevu4nrbbetsi83so0vvfbvnlac2y5pqbdk1g5g4nncg9vvd5at1jbg38tqaw40zrdmrsm2lvw9zsfadu23omg1c4b5vcx1362c4l6v6jh5oon9s1hbrdc7r754rbopndbz8y1zjxe7ewy8jcnvmq5z2z7wtk3nb459utzokhp68o5l39tv46zv4x7kfpcbmx7bpxgtqtq7oa0nprwqiezsz8bj5w9ndhajduxqi94uej6uodcsj74ycemicdoda0wjsp2g2busy14s7qlii3z88mcxloo4bfnsnum94cukjf52joo2robvqjvm6rhuk2l580rcoj5sl427vhmsz0p8lwiofpy200k6g16e0oe9png1pcxxbyoxb4budmnpx6xgndpm9s00khheajgm1gvy4ak5nxextnsx4w1eiik04ljq64crq4jkd236o1xf4hd19oqus3adgxrdm85za4tbqkw8bqs6z3ec10784116bj3nc1zx3ymon4kevxjnyodb3iea02rwe3n5gq952awrdght0qymtemuppi0gblxu7c4j5bm2lxl2veh3dmd0jdueggn39u8waufyfvrqn1hftxmq594oyt36rt11v38yzrqza960to7gmd28w3qnv2xdwnqbfm4yulqzflsnhpv7jkjtsvqsa3yg6dmng3ojlezfyye5qwghpsen7d6fhin7vte2tt025ik7yklz9t5271xd0ozbayyo9m7kz0jkjxlsfffboidgemtzdhsno0kos3dd1dcvfzjmh0h4ciyni7qvubr40g45b6vwgiac93yc7qxnvi1wz7son0zqy5ghpefooib331tzp8z46i94vt4b55ecsmk1ri1cl08htqbszo2izsxtcr5iztuxqu26b4hirusior4rmaqlk63nzuwngrgbhmnokgwd2iyef633jgtrv89k053kshzaw8z57ixkdsf7nnp4vmsbu0xz8eku18fzc1tgc5bwv3yaqkzetek6ucyv91uy6k13s7vveyl0s86fx1xkm5dfjjs16tx7qcv1qm0vbobsw4xu3p6l8tpgi2jgn3szk0lp7bc2xcjeo5aah512cz0x6q0ndrzlmm3pnm2vyu5aiuzx1t694wbxdcywysptnvg35ekvxnzsy4xo6ric2r89etrjdp37w4jdur2amkxksvl5p4u9jrx9cpta9x1fa66u4au4a3gnu7z7lawgshpxnr0b2gkh3ld601vw1o7erxmul5y41aswuxatx5o7yuf6zb364rvg1gtdsx132ih28ef722ok5cs73t7au8w25wy1nv1dgkjntj6xkn2xr8rc2lajgub6tn3xcf99wxbtpzofqdwg4f80nlumdrp8bfmvwubuq4bm91gtotae52y4cg0bat6bruwmn4os6xw3h9t0xjwjloqsxh4357o6ajj77myhphnafa7n5rt2uiuvauk4twj1o1y
7jd1nwelgvnzne9ia2al1vu5x4xpake8lbt1eyxlp9jq27o93bsihjexgb1h3qzmns7avcsr5uaie2z0tut2nergf8ldrre4j3a40mvdc4g2qe6d6723wii87m4k3noxfgoiqrylocm2gtpwb6sgscerfl69xtyddx6mkosd9f9zv5qppov0r480wdan78vecsa8zsb5oozmjkebuxkxjzpiku8gfn6oanfqytbg3a7f8v8gj0phaudtqo2y8tp5oag5u040lfovvzx9u9ee5upn8bq0my4ogjyp9zz3es1q76swxieqtjq4icahu2iy285mxm5r71b0c3s4o1s22przkccxu6zu5ps0t1t1he2jsik5lhhzdulxozshrb4cvxyzm5mvb4ivp178egt3wunjjc6qmveop5b5f2oskv31z6pfvi8qb9ty4zd9oq3vnaonyxo40nhr05ls0ap43wdtdsiz9md0ps2m0mpsz0ino3o1mjcutj13o5c5s4mrpaf7rwr38ijfalqg8am1xw8r85tde4sf7kiv507egbd9sfkrv5yayewer32hk2m0g6ms9y40jgaxgsgdsw98igkymhodkic0f2rkj69w2i8i8y8ao7b7mxgjlepsqs6bu8fogt3g63z0863kjea4fn3au3gkq1qcr33l4n15y933ffha9l5ejzkiu5vlvjb55tgwuf9dlukrop3yh7nig83mvble7i3gb5oudq6v1536bt6fgg04uiyuaod4xauq7jtwrtnd0ocnqddqfq3rsbnx852r0bm4ocb76722u5fxwln8bpoinvuspseavf1mm0p5a4e2ffiu379cyfc9c0w54g78z5ypwx6a7xvt4g5dmgvd9qt9kervqy1lnyh3o6g19fpzocv9z3x9ei1ymw5s4gegqh1b5mv5aog0rzd1zwoghcrmohd1n1x1bf3zbjvt2l75ecvl79a91793hb89egwro4xs9s9q6mzzwv9wwjfy86d74ejg62ko8byxjsbzww4p5y0nie45z3a1vwqb4aknh9hfiqd8334pehooepog83e7sczymopjza7qncnioblo9yipx198nxnczf5zymsc0z2y6mkyhc4jv0a22q7ca8lc669wugu0uvo3g2jthfpp1u41fnt3flk9zwobx7zeety22pzgsc5adpb1kc2eujcsp8h83c7tonlwpwurbty29fg2le5apv07f3750ivyhpc3fka84s1qfjql213wp01fyo1bcwv9b6ghwkupa75w45h0o4d6df1ot8brrf6bl909oz4rsvralpgiunv0lruc0gnc0vrjoz05dg50opm8ls73tp0hi4fakxff1obeix6kxxdr0417mcif9mro6vurf89wiv705b5e798zdogkxppk713c4iaqfmgs6dahtxklxbo5fnct21bfxgm3eyojw07wirtbdff2tgijmfsm5cvydrxhx86s2ebaghi984uxqinscfe1b3tg9e05ydbpa4prynvj2za7qn7kedmzao71bafktgfht97vuxz677bvv4x91fjebyoz1rw3d9pil9mkw9b5tl31pps9zapve9mi9e68rfilaar0qzjeil6on6dpj9rm7zd6tk31zwxvzxglqb4dkba8hfl7ox6cpzv0pxwshwxej00js7i1eowj7z9ljcuquszgdkxytt6kukxhe7jve8d51s36tt1gl9rgxjdrepbzgfg3ois60u9s9b39yhs02yxx52jcod1toxi7ns67xzb7wjnt1qg93eevbavvnjlc31vqji3zsw07xu70cd3n9vjw4e91ra4buw2vtpi3e5n62g8bq720zze9chfj8r5xnwk81l8thkdj3g6g425lt7ynqeabacj35mf0c5hndf3ip5bde37oq1qgnq7s2pe == 
\d\r\7\7\j\8\8\4\p\d\a\7\x\v\d\n\5\m\k\o\z\g\d\1\q\k\t\f\e\c\4\f\c\w\s\4\k\t\j\e\5\f\d\k\7\i\h\g\r\j\j\8\q\u\z\c\1\f\2\x\j\p\r\a\2\2\3\7\f\g\v\3\l\s\6\u\l\w\q\3\u\5\i\t\l\3\8\p\p\x\u\6\v\c\d\z\b\m\e\b\f\m\e\r\9\t\i\c\4\r\y\z\e\7\y\f\q\6\s\n\f\m\3\1\f\u\2\u\5\s\n\9\s\6\o\e\b\e\n\t\w\t\i\0\i\x\c\9\u\4\y\u\g\y\7\u\l\z\7\4\o\h\j\o\w\p\2\k\s\u\l\3\i\n\2\i\r\j\k\p\6\l\y\i\i\y\4\r\v\j\n\y\v\1\2\r\6\b\2\p\a\3\n\n\9\u\o\o\e\a\u\d\5\g\f\j\o\5\t\i\j\h\e\b\j\w\h\w\z\e\o\i\d\k\b\2\c\q\s\p\4\s\q\r\4\w\s\c\u\8\o\q\c\m\x\3\g\c\z\r\0\9\k\c\n\d\c\p\2\7\d\j\6\w\y\k\2\a\n\i\e\e\y\e\o\7\z\y\h\s\5\i\v\c\l\c\0\d\j\9\n\j\h\v\6\8\f\u\0\4\s\h\l\f\x\z\d\g\4\5\p\l\b\a\6\l\1\y\c\e\a\p\2\z\2\i\z\s\8\z\9\l\9\2\2\y\w\u\f\z\k\a\w\q\5\1\o\w\q\c\8\0\2\8\p\q\c\g\d\i\1\u\m\j\f\f\f\a\q\i\u\1\l\0\u\z\n\m\j\j\i\2\3\l\4\m\s\m\9\u\g\z\r\n\w\t\j\x\i\b\8\c\y\f\q\w\e\d\i\r\u\7\x\6\w\n\f\s\c\8\a\k\n\k\m\1\u\q\d\x\w\b\1\8\e\j\b\4\d\n\x\h\n\w\u\f\d\w\3\f\o\p\6\n\b\k\j\4\8\i\t\d\9\c\1\8\a\a\e\x\8\w\8\f\f\c\s\r\1\c\u\q\o\c\u\8\o\k\s\n\a\g\z\a\e\v\x\e\l\f\d\8\9\1\2\q\8\5\m\t\b\y\n\i\o\7\5\7\d\0\9\7\3\y\0\z\t\5\o\x\o\k\a\a\o\4\q\p\4\1\o\m\w\s\5\k\1\r\m\d\h\l\k\0\7\7\y\r\f\2\p\l\2\9\6\4\h\c\k\q\y\x\r\l\2\q\5\o\r\i\s\y\7\4\2\p\r\3\3\y\7\d\w\e\j\8\m\l\6\h\m\6\h\s\s\b\2\p\m\y\t\9\s\i\0\u\2\r\t\k\n\e\l\5\t\i\r\f\h\l\5\h\9\7\s\9\4\w\2\z\o\5\4\q\l\1\c\a\m\0\s\a\1\w\s\8\q\l\c\q\j\m\c\f\q\e\v\u\4\n\r\b\b\e\t\s\i\8\3\s\o\0\v\v\f\b\v\n\l\a\c\2\y\5\p\q\b\d\k\1\g\5\g\4\n\n\c\g\9\v\v\d\5\a\t\1\j\b\g\3\8\t\q\a\w\4\0\z\r\d\m\r\s\m\2\l\v\w\9\z\s\f\a\d\u\2\3\o\m\g\1\c\4\b\5\v\c\x\1\3\6\2\c\4\l\6\v\6\j\h\5\o\o\n\9\s\1\h\b\r\d\c\7\r\7\5\4\r\b\o\p\n\d\b\z\8\y\1\z\j\x\e\7\e\w\y\8\j\c\n\v\m\q\5\z\2\z\7\w\t\k\3\n\b\4\5\9\u\t\z\o\k\h\p\6\8\o\5\l\3\9\t\v\4\6\z\v\4\x\7\k\f\p\c\b\m\x\7\b\p\x\g\t\q\t\q\7\o\a\0\n\p\r\w\q\i\e\z\s\z\8\b\j\5\w\9\n\d\h\a\j\d\u\x\q\i\9\4\u\e\j\6\u\o\d\c\s\j\7\4\y\c\e\m\i\c\d\o\d\a\0\w\j\s\p\2\g\2\b\u\s\y\1\4\s\7\q\l\i\i\3\z\8\8\m\c\x\l\o\o\4\b\f\n\s\n\u\m\9\4\c\u\k\j\f\5\2\j\o\o\2\r\o\b\v\q\j\v\m\6\r\h\u\k\2\l\5\8\0\r\c\o\j\5\s\l\4\2\7\v\h\m\s\z\0\p\8\l\w\i\o\f\p\y\2\0\0\k\6\g\1\6\e\0\o\e\9\p\n\g\1\p\c\x\x\b\y\o\x\b\4\b\u\d\m\n\p\x\6\x\g\n\d\p\m\9\s\0\0\k\h\h\e\a\j\g\m\1\g\v\y\4\a\k\5\n\x\e\x\t\n\s\x\4\w\1\e\i\i\k\0\4\l\j\q\6\4\c\r\q\4\j\k\d\2\3\6\o\1\x\f\4\h\d\1\9\o\q\u\s\3\a\d\g\x\r\d\m\8\5\z\a\4\t\b\q\k\w\8\b\q\s\6\z\3\e\c\1\0\7\8\4\1\1\6\b\j\3\n\c\1\z\x\3\y\m\o\n\4\k\e\v\x\j\n\y\o\d\b\3\i\e\a\0\2\r\w\e\3\n\5\g\q\9\5\2\a\w\r\d\g\h\t\0\q\y\m\t\e\m\u\p\p\i\0\g\b\l\x\u\7\c\4\j\5\b\m\2\l\x\l\2\v\e\h\3\d\m\d\0\j\d\u\e\g\g\n\3\9\u\8\w\a\u\f\y\f\v\r\q\n\1\h\f\t\x\m\q\5\9\4\o\y\t\3\6\r\t\1\1\v\3\8\y\z\r\q\z\a\9\6\0\t\o\7\g\m\d\2\8\w\3\q\n\v\2\x\d\w\n\q\b\f\m\4\y\u\l\q\z\f\l\s\n\h\p\v\7\j\k\j\t\s\v\q\s\a\3\y\g\6\d\m\n\g\3\o\j\l\e\z\f\y\y\e\5\q\w\g\h\p\s\e\n\7\d\6\f\h\i\n\7\v\t\e\2\t\t\0\2\5\i\k\7\y\k\l\z\9\t\5\2\7\1\x\d\0\o\z\b\a\y\y\o\9\m\7\k\z\0\j\k\j\x\l\s\f\f\f\b\o\i\d\g\e\m\t\z\d\h\s\n\o\0\k\o\s\3\d\d\1\d\c\v\f\z\j\m\h\0\h\4\c\i\y\n\i\7\q\v\u\b\r\4\0\g\4\5\b\6\v\w\g\i\a\c\9\3\y\c\7\q\x\n\v\i\1\w\z\7\s\o\n\0\z\q\y\5\g\h\p\e\f\o\o\i\b\3\3\1\t\z\p\8\z\4\6\i\9\4\v\t\4\b\5\5\e\c\s\m\k\1\r\i\1\c\l\0\8\h\t\q\b\s\z\o\2\i\z\s\x\t\c\r\5\i\z\t\u\x\q\u\2\6\b\4\h\i\r\u\s\i\o\r\4\r\m\a\q\l\k\6\3\n\z\u\w\n\g\r\g\b\h\m\n\o\k\g\w\d\2\i\y\e\f\6\3\3\j\g\t\r\v\8\9\k\0\5\3\k\s\h\z\a\w\8\z\5\7\i\x\k\d\s\f\7\n\n\p\4\v\m\s\b\u\0\x\z\8\e\k\u\1\8\f\z\c\1\t\g\c\5\b\w\v\3\y\a\q\k\z\e\t\e\k\6\u\c\y\v\9\1\u\y\6\k\1\3\s\7\v\v\e\y\l\0\s\8\6\f\x\1\x\k\m\5\d\f\j\j\s\1\6\t\x\7\q\c\v\1\q\m\0\v\b\o\b\s\w\4\x\u\3\p\6\l\8\t\p\g\i\2\j\g\n\3\s\z\k\0\l\p\7\b\c\2\x\c\j\e\o\5\a\a\h\5\1\
2\c\z\0\x\6\q\0\n\d\r\z\l\m\m\3\p\n\m\2\v\y\u\5\a\i\u\z\x\1\t\6\9\4\w\b\x\d\c\y\w\y\s\p\t\n\v\g\3\5\e\k\v\x\n\z\s\y\4\x\o\6\r\i\c\2\r\8\9\e\t\r\j\d\p\3\7\w\4\j\d\u\r\2\a\m\k\x\k\s\v\l\5\p\4\u\9\j\r\x\9\c\p\t\a\9\x\1\f\a\6\6\u\4\a\u\4\a\3\g\n\u\7\z\7\l\a\w\g\s\h\p\x\n\r\0\b\2\g\k\h\3\l\d\6\0\1\v\w\1\o\7\e\r\x\m\u\l\5\y\4\1\a\s\w\u\x\a\t\x\5\o\7\y\u\f\6\z\b\3\6\4\r\v\g\1\g\t\d\s\x\1\3\2\i\h\2\8\e\f\7\2\2\o\k\5\c\s\7\3\t\7\a\u\8\w\2\5\w\y\1\n\v\1\d\g\k\j\n\t\j\6\x\k\n\2\x\r\8\r\c\2\l\a\j\g\u\b\6\t\n\3\x\c\f\9\9\w\x\b\t\p\z\o\f\q\d\w\g\4\f\8\0\n\l\u\m\d\r\p\8\b\f\m\v\w\u\b\u\q\4\b\m\9\1\g\t\o\t\a\e\5\2\y\4\c\g\0\b\a\t\6\b\r\u\w\m\n\4\o\s\6\x\w\3\h\9\t\0\x\j\w\j\l\o\q\s\x\h\4\3\5\7\o\6\a\j\j\7\7\m\y\h\p\h\n\a\f\a\7\n\5\r\t\2\u\i\u\v\a\u\k\4\t\w\j\1\o\1\y\7\j\d\1\n\w\e\l\g\v\n\z\n\e\9\i\a\2\a\l\1\v\u\5\x\4\x\p\a\k\e\8\l\b\t\1\e\y\x\l\p\9\j\q\2\7\o\9\3\b\s\i\h\j\e\x\g\b\1\h\3\q\z\m\n\s\7\a\v\c\s\r\5\u\a\i\e\2\z\0\t\u\t\2\n\e\r\g\f\8\l\d\r\r\e\4\j\3\a\4\0\m\v\d\c\4\g\2\q\e\6\d\6\7\2\3\w\i\i\8\7\m\4\k\3\n\o\x\f\g\o\i\q\r\y\l\o\c\m\2\g\t\p\w\b\6\s\g\s\c\e\r\f\l\6\9\x\t\y\d\d\x\6\m\k\o\s\d\9\f\9\z\v\5\q\p\p\o\v\0\r\4\8\0\w\d\a\n\7\8\v\e\c\s\a\8\z\s\b\5\o\o\z\m\j\k\e\b\u\x\k\x\j\z\p\i\k\u\8\g\f\n\6\o\a\n\f\q\y\t\b\g\3\a\7\f\8\v\8\g\j\0\p\h\a\u\d\t\q\o\2\y\8\t\p\5\o\a\g\5\u\0\4\0\l\f\o\v\v\z\x\9\u\9\e\e\5\u\p\n\8\b\q\0\m\y\4\o\g\j\y\p\9\z\z\3\e\s\1\q\7\6\s\w\x\i\e\q\t\j\q\4\i\c\a\h\u\2\i\y\2\8\5\m\x\m\5\r\7\1\b\0\c\3\s\4\o\1\s\2\2\p\r\z\k\c\c\x\u\6\z\u\5\p\s\0\t\1\t\1\h\e\2\j\s\i\k\5\l\h\h\z\d\u\l\x\o\z\s\h\r\b\4\c\v\x\y\z\m\5\m\v\b\4\i\v\p\1\7\8\e\g\t\3\w\u\n\j\j\c\6\q\m\v\e\o\p\5\b\5\f\2\o\s\k\v\3\1\z\6\p\f\v\i\8\q\b\9\t\y\4\z\d\9\o\q\3\v\n\a\o\n\y\x\o\4\0\n\h\r\0\5\l\s\0\a\p\4\3\w\d\t\d\s\i\z\9\m\d\0\p\s\2\m\0\m\p\s\z\0\i\n\o\3\o\1\m\j\c\u\t\j\1\3\o\5\c\5\s\4\m\r\p\a\f\7\r\w\r\3\8\i\j\f\a\l\q\g\8\a\m\1\x\w\8\r\8\5\t\d\e\4\s\f\7\k\i\v\5\0\7\e\g\b\d\9\s\f\k\r\v\5\y\a\y\e\w\e\r\3\2\h\k\2\m\0\g\6\m\s\9\y\4\0\j\g\a\x\g\s\g\d\s\w\9\8\i\g\k\y\m\h\o\d\k\i\c\0\f\2\r\k\j\6\9\w\2\i\8\i\8\y\8\a\o\7\b\7\m\x\g\j\l\e\p\s\q\s\6\b\u\8\f\o\g\t\3\g\6\3\z\0\8\6\3\k\j\e\a\4\f\n\3\a\u\3\g\k\q\1\q\c\r\3\3\l\4\n\1\5\y\9\3\3\f\f\h\a\9\l\5\e\j\z\k\i\u\5\v\l\v\j\b\5\5\t\g\w\u\f\9\d\l\u\k\r\o\p\3\y\h\7\n\i\g\8\3\m\v\b\l\e\7\i\3\g\b\5\o\u\d\q\6\v\1\5\3\6\b\t\6\f\g\g\0\4\u\i\y\u\a\o\d\4\x\a\u\q\7\j\t\w\r\t\n\d\0\o\c\n\q\d\d\q\f\q\3\r\s\b\n\x\8\5\2\r\0\b\m\4\o\c\b\7\6\7\2\2\u\5\f\x\w\l\n\8\b\p\o\i\n\v\u\s\p\s\e\a\v\f\1\m\m\0\p\5\a\4\e\2\f\f\i\u\3\7\9\c\y\f\c\9\c\0\w\5\4\g\7\8\z\5\y\p\w\x\6\a\7\x\v\t\4\g\5\d\m\g\v\d\9\q\t\9\k\e\r\v\q\y\1\l\n\y\h\3\o\6\g\1\9\f\p\z\o\c\v\9\z\3\x\9\e\i\1\y\m\w\5\s\4\g\e\g\q\h\1\b\5\m\v\5\a\o\g\0\r\z\d\1\z\w\o\g\h\c\r\m\o\h\d\1\n\1\x\1\b\f\3\z\b\j\v\t\2\l\7\5\e\c\v\l\7\9\a\9\1\7\9\3\h\b\8\9\e\g\w\r\o\4\x\s\9\s\9\q\6\m\z\z\w\v\9\w\w\j\f\y\8\6\d\7\4\e\j\g\6\2\k\o\8\b\y\x\j\s\b\z\w\w\4\p\5\y\0\n\i\e\4\5\z\3\a\1\v\w\q\b\4\a\k\n\h\9\h\f\i\q\d\8\3\3\4\p\e\h\o\o\e\p\o\g\8\3\e\7\s\c\z\y\m\o\p\j\z\a\7\q\n\c\n\i\o\b\l\o\9\y\i\p\x\1\9\8\n\x\n\c\z\f\5\z\y\m\s\c\0\z\2\y\6\m\k\y\h\c\4\j\v\0\a\2\2\q\7\c\a\8\l\c\6\6\9\w\u\g\u\0\u\v\o\3\g\2\j\t\h\f\p\p\1\u\4\1\f\n\t\3\f\l\k\9\z\w\o\b\x\7\z\e\e\t\y\2\2\p\z\g\s\c\5\a\d\p\b\1\k\c\2\e\u\j\c\s\p\8\h\8\3\c\7\t\o\n\l\w\p\w\u\r\b\t\y\2\9\f\g\2\l\e\5\a\p\v\0\7\f\3\7\5\0\i\v\y\h\p\c\3\f\k\a\8\4\s\1\q\f\j\q\l\2\1\3\w\p\0\1\f\y\o\1\b\c\w\v\9\b\6\g\h\w\k\u\p\a\7\5\w\4\5\h\0\o\4\d\6\d\f\1\o\t\8\b\r\r\f\6\b\l\9\0\9\o\z\4\r\s\v\r\a\l\p\g\i\u\n\v\0\l\r\u\c\0\g\n\c\0\v\r\j\o\z\0\5\d\g\5\0\o\p\m\8\l\s\7\3\t\p\0\h\i\4\f\a\k\x\f\f\1\o\b\e\i\x\6\k\x\x\d\r\0\4\1\7\m\c\i\f\9\m\r\o\6\v\u\r\f\8
\9\w\i\v\7\0\5\b\5\e\7\9\8\z\d\o\g\k\x\p\p\k\7\1\3\c\4\i\a\q\f\m\g\s\6\d\a\h\t\x\k\l\x\b\o\5\f\n\c\t\2\1\b\f\x\g\m\3\e\y\o\j\w\0\7\w\i\r\t\b\d\f\f\2\t\g\i\j\m\f\s\m\5\c\v\y\d\r\x\h\x\8\6\s\2\e\b\a\g\h\i\9\8\4\u\x\q\i\n\s\c\f\e\1\b\3\t\g\9\e\0\5\y\d\b\p\a\4\p\r\y\n\v\j\2\z\a\7\q\n\7\k\e\d\m\z\a\o\7\1\b\a\f\k\t\g\f\h\t\9\7\v\u\x\z\6\7\7\b\v\v\4\x\9\1\f\j\e\b\y\o\z\1\r\w\3\d\9\p\i\l\9\m\k\w\9\b\5\t\l\3\1\p\p\s\9\z\a\p\v\e\9\m\i\9\e\6\8\r\f\i\l\a\a\r\0\q\z\j\e\i\l\6\o\n\6\d\p\j\9\r\m\7\z\d\6\t\k\3\1\z\w\x\v\z\x\g\l\q\b\4\d\k\b\a\8\h\f\l\7\o\x\6\c\p\z\v\0\p\x\w\s\h\w\x\e\j\0\0\j\s\7\i\1\e\o\w\j\7\z\9\l\j\c\u\q\u\s\z\g\d\k\x\y\t\t\6\k\u\k\x\h\e\7\j\v\e\8\d\5\1\s\3\6\t\t\1\g\l\9\r\g\x\j\d\r\e\p\b\z\g\f\g\3\o\i\s\6\0\u\9\s\9\b\3\9\y\h\s\0\2\y\x\x\5\2\j\c\o\d\1\t\o\x\i\7\n\s\6\7\x\z\b\7\w\j\n\t\1\q\g\9\3\e\e\v\b\a\v\v\n\j\l\c\3\1\v\q\j\i\3\z\s\w\0\7\x\u\7\0\c\d\3\n\9\v\j\w\4\e\9\1\r\a\4\b\u\w\2\v\t\p\i\3\e\5\n\6\2\g\8\b\q\7\2\0\z\z\e\9\c\h\f\j\8\r\5\x\n\w\k\8\1\l\8\t\h\k\d\j\3\g\6\g\4\2\5\l\t\7\y\n\q\e\a\b\a\c\j\3\5\m\f\0\c\5\h\n\d\f\3\i\p\5\b\d\e\3\7\o\q\1\q\g\n\q\7\s\2\p\e ]] 00:06:06.320 ************************************ 00:06:06.320 END TEST dd_rw_offset 00:06:06.320 ************************************ 00:06:06.320 00:06:06.320 real 0m0.998s 00:06:06.320 user 0m0.718s 00:06:06.320 sys 0m0.365s 00:06:06.320 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.320 16:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.320 16:09:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.579 [2024-07-12 16:09:50.085058] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:06.579 [2024-07-12 16:09:50.085161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62754 ] 00:06:06.579 { 00:06:06.579 "subsystems": [ 00:06:06.579 { 00:06:06.579 "subsystem": "bdev", 00:06:06.579 "config": [ 00:06:06.579 { 00:06:06.579 "params": { 00:06:06.579 "trtype": "pcie", 00:06:06.579 "traddr": "0000:00:10.0", 00:06:06.579 "name": "Nvme0" 00:06:06.579 }, 00:06:06.579 "method": "bdev_nvme_attach_controller" 00:06:06.579 }, 00:06:06.579 { 00:06:06.579 "method": "bdev_wait_for_examine" 00:06:06.579 } 00:06:06.579 ] 00:06:06.579 } 00:06:06.579 ] 00:06:06.579 } 00:06:06.579 [2024-07-12 16:09:50.222265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.579 [2024-07-12 16:09:50.271997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.579 [2024-07-12 16:09:50.298614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.838  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:06.838 00:06:06.838 16:09:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.838 00:06:06.838 real 0m14.662s 00:06:06.838 user 0m10.816s 00:06:06.838 sys 0m4.293s 00:06:06.838 ************************************ 00:06:06.838 END TEST spdk_dd_basic_rw 00:06:06.838 ************************************ 00:06:06.838 16:09:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.838 16:09:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.097 16:09:50 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:07.097 16:09:50 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:07.097 16:09:50 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.097 16:09:50 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.097 16:09:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:07.097 ************************************ 00:06:07.097 START TEST spdk_dd_posix 00:06:07.097 ************************************ 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:07.097 * Looking for test storage... 
00:06:07.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:07.097 * First test run, liburing in use 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:07.097 ************************************ 00:06:07.097 START TEST dd_flag_append 00:06:07.097 ************************************ 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=hrvhgyksgr86qlv828vyllauo5641t0g 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=oizxi08fstp2jbmzzus9lrcai0lq9g8o 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s hrvhgyksgr86qlv828vyllauo5641t0g 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s oizxi08fstp2jbmzzus9lrcai0lq9g8o 00:06:07.097 16:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:07.097 [2024-07-12 16:09:50.736778] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:07.097 [2024-07-12 16:09:50.736889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62818 ] 00:06:07.356 [2024-07-12 16:09:50.874719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.356 [2024-07-12 16:09:50.923134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.356 [2024-07-12 16:09:50.949385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.614  Copying: 32/32 [B] (average 31 kBps) 00:06:07.614 00:06:07.614 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ oizxi08fstp2jbmzzus9lrcai0lq9g8ohrvhgyksgr86qlv828vyllauo5641t0g == \o\i\z\x\i\0\8\f\s\t\p\2\j\b\m\z\z\u\s\9\l\r\c\a\i\0\l\q\9\g\8\o\h\r\v\h\g\y\k\s\g\r\8\6\q\l\v\8\2\8\v\y\l\l\a\u\o\5\6\4\1\t\0\g ]] 00:06:07.614 00:06:07.614 real 0m0.427s 00:06:07.614 user 0m0.224s 00:06:07.614 sys 0m0.174s 00:06:07.614 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.614 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:07.614 ************************************ 00:06:07.614 END TEST dd_flag_append 00:06:07.614 ************************************ 00:06:07.614 16:09:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:07.614 16:09:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:07.614 16:09:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.614 16:09:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.614 16:09:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:07.614 ************************************ 00:06:07.614 START TEST dd_flag_directory 00:06:07.614 ************************************ 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:07.615 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.615 [2024-07-12 16:09:51.220220] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:07.615 [2024-07-12 16:09:51.220327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62846 ] 00:06:07.873 [2024-07-12 16:09:51.358664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.873 [2024-07-12 16:09:51.411579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.873 [2024-07-12 16:09:51.439221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.874 [2024-07-12 16:09:51.455088] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.874 [2024-07-12 16:09:51.455155] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.874 [2024-07-12 16:09:51.455184] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.874 [2024-07-12 16:09:51.514058] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:07.874 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:08.131 [2024-07-12 16:09:51.646466] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:08.131 [2024-07-12 16:09:51.646567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62850 ] 00:06:08.131 [2024-07-12 16:09:51.783055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.131 [2024-07-12 16:09:51.832970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.390 [2024-07-12 16:09:51.859168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.390 [2024-07-12 16:09:51.874753] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:08.390 [2024-07-12 16:09:51.874803] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:08.390 [2024-07-12 16:09:51.874832] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.390 [2024-07-12 16:09:51.929323] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:08.390 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:08.390 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.390 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:08.390 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:08.390 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:08.390 16:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.390 00:06:08.390 real 0m0.840s 00:06:08.390 user 0m0.448s 00:06:08.390 sys 0m0.183s 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.390 ************************************ 00:06:08.390 END TEST dd_flag_directory 00:06:08.390 ************************************ 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.390 ************************************ 00:06:08.390 START TEST dd_flag_nofollow 00:06:08.390 ************************************ 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.390 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.390 
[2024-07-12 16:09:52.112657] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:08.649 [2024-07-12 16:09:52.113257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62880 ] 00:06:08.649 [2024-07-12 16:09:52.250525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.649 [2024-07-12 16:09:52.297959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.649 [2024-07-12 16:09:52.323896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.649 [2024-07-12 16:09:52.339294] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:08.649 [2024-07-12 16:09:52.339611] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:08.649 [2024-07-12 16:09:52.339753] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.907 [2024-07-12 16:09:52.395369] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:08.907 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:08.907 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.907 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:08.907 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:08.907 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:08.907 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.907 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.908 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.908 [2024-07-12 16:09:52.528460] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:08.908 [2024-07-12 16:09:52.528582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62889 ] 00:06:09.166 [2024-07-12 16:09:52.667194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.166 [2024-07-12 16:09:52.716263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.166 [2024-07-12 16:09:52.741252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.166 [2024-07-12 16:09:52.755909] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:09.166 [2024-07-12 16:09:52.755955] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:09.166 [2024-07-12 16:09:52.755968] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.166 [2024-07-12 16:09:52.810108] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.166 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:09.166 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.424 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:09.424 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:09.424 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:09.424 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.424 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:09.424 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:09.424 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 16:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.424 [2024-07-12 16:09:52.956883] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:09.424 [2024-07-12 16:09:52.957138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62897 ] 00:06:09.424 [2024-07-12 16:09:53.093167] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.424 [2024-07-12 16:09:53.142079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.682 [2024-07-12 16:09:53.169091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.682  Copying: 512/512 [B] (average 500 kBps) 00:06:09.682 00:06:09.682 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 4oli30trqqz6qtum8a44or1p1ens6ht3xa1tu6md4cvpfl5mmbxtur8hv1f9ydyuwqab19ott8m0ise2pqwls31isu6wb54kwoyei9icnh0xnou7oyu0epz40xwo8ah38mgxh2o2ex1neihf2rfs13pji5reu74faoqlscs49fqx3dwbhmrj9y8hpsub7emo7wstze47isy9gapxde4h0u56sixiwsrzjvqm6mw6msfynokw6ardv26zavimvef69mg4l7m19y3r7azkcq60eg2pviuh9oygmxhqdnftfv5vw1odxwmhuxkfqt2gh7rvr0a7bksrhxmvik9euy9zf9enp5ufpmunrvjadrv8hj18unax307hg5sm0ft54d9z7hljhp8pj5at7twfnocxyvtp9dbq50ws32wqsobjkl51ofxszgykl7yww1dg79bvtvbrx292hop4idgp8wfw6lbx7xtjk5on68dajrq1e88ae0kcle8ifdes6ttporvu == \4\o\l\i\3\0\t\r\q\q\z\6\q\t\u\m\8\a\4\4\o\r\1\p\1\e\n\s\6\h\t\3\x\a\1\t\u\6\m\d\4\c\v\p\f\l\5\m\m\b\x\t\u\r\8\h\v\1\f\9\y\d\y\u\w\q\a\b\1\9\o\t\t\8\m\0\i\s\e\2\p\q\w\l\s\3\1\i\s\u\6\w\b\5\4\k\w\o\y\e\i\9\i\c\n\h\0\x\n\o\u\7\o\y\u\0\e\p\z\4\0\x\w\o\8\a\h\3\8\m\g\x\h\2\o\2\e\x\1\n\e\i\h\f\2\r\f\s\1\3\p\j\i\5\r\e\u\7\4\f\a\o\q\l\s\c\s\4\9\f\q\x\3\d\w\b\h\m\r\j\9\y\8\h\p\s\u\b\7\e\m\o\7\w\s\t\z\e\4\7\i\s\y\9\g\a\p\x\d\e\4\h\0\u\5\6\s\i\x\i\w\s\r\z\j\v\q\m\6\m\w\6\m\s\f\y\n\o\k\w\6\a\r\d\v\2\6\z\a\v\i\m\v\e\f\6\9\m\g\4\l\7\m\1\9\y\3\r\7\a\z\k\c\q\6\0\e\g\2\p\v\i\u\h\9\o\y\g\m\x\h\q\d\n\f\t\f\v\5\v\w\1\o\d\x\w\m\h\u\x\k\f\q\t\2\g\h\7\r\v\r\0\a\7\b\k\s\r\h\x\m\v\i\k\9\e\u\y\9\z\f\9\e\n\p\5\u\f\p\m\u\n\r\v\j\a\d\r\v\8\h\j\1\8\u\n\a\x\3\0\7\h\g\5\s\m\0\f\t\5\4\d\9\z\7\h\l\j\h\p\8\p\j\5\a\t\7\t\w\f\n\o\c\x\y\v\t\p\9\d\b\q\5\0\w\s\3\2\w\q\s\o\b\j\k\l\5\1\o\f\x\s\z\g\y\k\l\7\y\w\w\1\d\g\7\9\b\v\t\v\b\r\x\2\9\2\h\o\p\4\i\d\g\p\8\w\f\w\6\l\b\x\7\x\t\j\k\5\o\n\6\8\d\a\j\r\q\1\e\8\8\a\e\0\k\c\l\e\8\i\f\d\e\s\6\t\t\p\o\r\v\u ]] 00:06:09.682 00:06:09.682 real 0m1.267s 00:06:09.682 user 0m0.694s 00:06:09.682 sys 0m0.333s 00:06:09.682 ************************************ 00:06:09.682 END TEST dd_flag_nofollow 00:06:09.682 ************************************ 00:06:09.682 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:09.683 ************************************ 00:06:09.683 START TEST dd_flag_noatime 00:06:09.683 ************************************ 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:09.683 16:09:53 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720800593 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720800593 00:06:09.683 16:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:11.058 16:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.058 [2024-07-12 16:09:54.445519] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:11.058 [2024-07-12 16:09:54.446115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62939 ] 00:06:11.058 [2024-07-12 16:09:54.585735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.058 [2024-07-12 16:09:54.653506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.058 [2024-07-12 16:09:54.685602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.317  Copying: 512/512 [B] (average 500 kBps) 00:06:11.317 00:06:11.317 16:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.317 16:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720800593 )) 00:06:11.317 16:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.317 16:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720800593 )) 00:06:11.317 16:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.317 [2024-07-12 16:09:54.925496] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:11.317 [2024-07-12 16:09:54.925585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62953 ] 00:06:11.576 [2024-07-12 16:09:55.061859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.576 [2024-07-12 16:09:55.117366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.576 [2024-07-12 16:09:55.143613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.576  Copying: 512/512 [B] (average 500 kBps) 00:06:11.576 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.835 ************************************ 00:06:11.835 END TEST dd_flag_noatime 00:06:11.835 ************************************ 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720800595 )) 00:06:11.835 00:06:11.835 real 0m1.937s 00:06:11.835 user 0m0.510s 00:06:11.835 sys 0m0.367s 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:11.835 ************************************ 00:06:11.835 START TEST dd_flags_misc 00:06:11.835 ************************************ 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.835 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:11.835 [2024-07-12 16:09:55.423567] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:11.835 [2024-07-12 16:09:55.423683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62981 ] 00:06:12.094 [2024-07-12 16:09:55.562282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.094 [2024-07-12 16:09:55.613028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.094 [2024-07-12 16:09:55.639500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.094  Copying: 512/512 [B] (average 500 kBps) 00:06:12.094 00:06:12.094 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fjhdzita3zkpjdbd2d4grayeqvkpyp3xvntoz2yq53z3ela3gjq6o172g87f4d3k9u3yhdz1sz8jmw11fcs8pz8hcogww9hn2xbp513p9firuoh7qunukpxgahqs3qfbelnkz7tktwoefsvpcotdvxwazg644bgg9uxka6ogrl75m6z8xopx8n8y465phm6hwttbaok3eyco2qm5x2w35ze7yysc5uyq6c44yd8cc0nwetuifpswb0260ai4ry3lser52ysnpwr6clogvge0bz55gluymzh4n6gad548q50r4j6z5hpcuhziymyifq1v5yqgcyzgl3b17veg853tb7cg10onc109wqsbmd28kabromcoqhw5p7x7edqnrsxookaa3f0iaua39zq7h8xmmw0xdpcvv2bbdxuenz4md98h9avwojn9hh7gqwkjj0bqql5z62z81wb6bg5l2lerkyobs7cqggkn48nfc13nqn1ng7moqdwm8t8ahkns8qc2 == \f\j\h\d\z\i\t\a\3\z\k\p\j\d\b\d\2\d\4\g\r\a\y\e\q\v\k\p\y\p\3\x\v\n\t\o\z\2\y\q\5\3\z\3\e\l\a\3\g\j\q\6\o\1\7\2\g\8\7\f\4\d\3\k\9\u\3\y\h\d\z\1\s\z\8\j\m\w\1\1\f\c\s\8\p\z\8\h\c\o\g\w\w\9\h\n\2\x\b\p\5\1\3\p\9\f\i\r\u\o\h\7\q\u\n\u\k\p\x\g\a\h\q\s\3\q\f\b\e\l\n\k\z\7\t\k\t\w\o\e\f\s\v\p\c\o\t\d\v\x\w\a\z\g\6\4\4\b\g\g\9\u\x\k\a\6\o\g\r\l\7\5\m\6\z\8\x\o\p\x\8\n\8\y\4\6\5\p\h\m\6\h\w\t\t\b\a\o\k\3\e\y\c\o\2\q\m\5\x\2\w\3\5\z\e\7\y\y\s\c\5\u\y\q\6\c\4\4\y\d\8\c\c\0\n\w\e\t\u\i\f\p\s\w\b\0\2\6\0\a\i\4\r\y\3\l\s\e\r\5\2\y\s\n\p\w\r\6\c\l\o\g\v\g\e\0\b\z\5\5\g\l\u\y\m\z\h\4\n\6\g\a\d\5\4\8\q\5\0\r\4\j\6\z\5\h\p\c\u\h\z\i\y\m\y\i\f\q\1\v\5\y\q\g\c\y\z\g\l\3\b\1\7\v\e\g\8\5\3\t\b\7\c\g\1\0\o\n\c\1\0\9\w\q\s\b\m\d\2\8\k\a\b\r\o\m\c\o\q\h\w\5\p\7\x\7\e\d\q\n\r\s\x\o\o\k\a\a\3\f\0\i\a\u\a\3\9\z\q\7\h\8\x\m\m\w\0\x\d\p\c\v\v\2\b\b\d\x\u\e\n\z\4\m\d\9\8\h\9\a\v\w\o\j\n\9\h\h\7\g\q\w\k\j\j\0\b\q\q\l\5\z\6\2\z\8\1\w\b\6\b\g\5\l\2\l\e\r\k\y\o\b\s\7\c\q\g\g\k\n\4\8\n\f\c\1\3\n\q\n\1\n\g\7\m\o\q\d\w\m\8\t\8\a\h\k\n\s\8\q\c\2 ]] 00:06:12.094 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.094 16:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:12.354 [2024-07-12 16:09:55.844892] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:12.354 [2024-07-12 16:09:55.844983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62991 ] 00:06:12.354 [2024-07-12 16:09:55.977124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.354 [2024-07-12 16:09:56.025093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.354 [2024-07-12 16:09:56.050642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.613  Copying: 512/512 [B] (average 500 kBps) 00:06:12.613 00:06:12.613 16:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fjhdzita3zkpjdbd2d4grayeqvkpyp3xvntoz2yq53z3ela3gjq6o172g87f4d3k9u3yhdz1sz8jmw11fcs8pz8hcogww9hn2xbp513p9firuoh7qunukpxgahqs3qfbelnkz7tktwoefsvpcotdvxwazg644bgg9uxka6ogrl75m6z8xopx8n8y465phm6hwttbaok3eyco2qm5x2w35ze7yysc5uyq6c44yd8cc0nwetuifpswb0260ai4ry3lser52ysnpwr6clogvge0bz55gluymzh4n6gad548q50r4j6z5hpcuhziymyifq1v5yqgcyzgl3b17veg853tb7cg10onc109wqsbmd28kabromcoqhw5p7x7edqnrsxookaa3f0iaua39zq7h8xmmw0xdpcvv2bbdxuenz4md98h9avwojn9hh7gqwkjj0bqql5z62z81wb6bg5l2lerkyobs7cqggkn48nfc13nqn1ng7moqdwm8t8ahkns8qc2 == \f\j\h\d\z\i\t\a\3\z\k\p\j\d\b\d\2\d\4\g\r\a\y\e\q\v\k\p\y\p\3\x\v\n\t\o\z\2\y\q\5\3\z\3\e\l\a\3\g\j\q\6\o\1\7\2\g\8\7\f\4\d\3\k\9\u\3\y\h\d\z\1\s\z\8\j\m\w\1\1\f\c\s\8\p\z\8\h\c\o\g\w\w\9\h\n\2\x\b\p\5\1\3\p\9\f\i\r\u\o\h\7\q\u\n\u\k\p\x\g\a\h\q\s\3\q\f\b\e\l\n\k\z\7\t\k\t\w\o\e\f\s\v\p\c\o\t\d\v\x\w\a\z\g\6\4\4\b\g\g\9\u\x\k\a\6\o\g\r\l\7\5\m\6\z\8\x\o\p\x\8\n\8\y\4\6\5\p\h\m\6\h\w\t\t\b\a\o\k\3\e\y\c\o\2\q\m\5\x\2\w\3\5\z\e\7\y\y\s\c\5\u\y\q\6\c\4\4\y\d\8\c\c\0\n\w\e\t\u\i\f\p\s\w\b\0\2\6\0\a\i\4\r\y\3\l\s\e\r\5\2\y\s\n\p\w\r\6\c\l\o\g\v\g\e\0\b\z\5\5\g\l\u\y\m\z\h\4\n\6\g\a\d\5\4\8\q\5\0\r\4\j\6\z\5\h\p\c\u\h\z\i\y\m\y\i\f\q\1\v\5\y\q\g\c\y\z\g\l\3\b\1\7\v\e\g\8\5\3\t\b\7\c\g\1\0\o\n\c\1\0\9\w\q\s\b\m\d\2\8\k\a\b\r\o\m\c\o\q\h\w\5\p\7\x\7\e\d\q\n\r\s\x\o\o\k\a\a\3\f\0\i\a\u\a\3\9\z\q\7\h\8\x\m\m\w\0\x\d\p\c\v\v\2\b\b\d\x\u\e\n\z\4\m\d\9\8\h\9\a\v\w\o\j\n\9\h\h\7\g\q\w\k\j\j\0\b\q\q\l\5\z\6\2\z\8\1\w\b\6\b\g\5\l\2\l\e\r\k\y\o\b\s\7\c\q\g\g\k\n\4\8\n\f\c\1\3\n\q\n\1\n\g\7\m\o\q\d\w\m\8\t\8\a\h\k\n\s\8\q\c\2 ]] 00:06:12.613 16:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.613 16:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:12.613 [2024-07-12 16:09:56.272604] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:12.613 [2024-07-12 16:09:56.272693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62995 ] 00:06:12.872 [2024-07-12 16:09:56.405745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.872 [2024-07-12 16:09:56.455092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.872 [2024-07-12 16:09:56.480720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.132  Copying: 512/512 [B] (average 100 kBps) 00:06:13.132 00:06:13.132 16:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fjhdzita3zkpjdbd2d4grayeqvkpyp3xvntoz2yq53z3ela3gjq6o172g87f4d3k9u3yhdz1sz8jmw11fcs8pz8hcogww9hn2xbp513p9firuoh7qunukpxgahqs3qfbelnkz7tktwoefsvpcotdvxwazg644bgg9uxka6ogrl75m6z8xopx8n8y465phm6hwttbaok3eyco2qm5x2w35ze7yysc5uyq6c44yd8cc0nwetuifpswb0260ai4ry3lser52ysnpwr6clogvge0bz55gluymzh4n6gad548q50r4j6z5hpcuhziymyifq1v5yqgcyzgl3b17veg853tb7cg10onc109wqsbmd28kabromcoqhw5p7x7edqnrsxookaa3f0iaua39zq7h8xmmw0xdpcvv2bbdxuenz4md98h9avwojn9hh7gqwkjj0bqql5z62z81wb6bg5l2lerkyobs7cqggkn48nfc13nqn1ng7moqdwm8t8ahkns8qc2 == \f\j\h\d\z\i\t\a\3\z\k\p\j\d\b\d\2\d\4\g\r\a\y\e\q\v\k\p\y\p\3\x\v\n\t\o\z\2\y\q\5\3\z\3\e\l\a\3\g\j\q\6\o\1\7\2\g\8\7\f\4\d\3\k\9\u\3\y\h\d\z\1\s\z\8\j\m\w\1\1\f\c\s\8\p\z\8\h\c\o\g\w\w\9\h\n\2\x\b\p\5\1\3\p\9\f\i\r\u\o\h\7\q\u\n\u\k\p\x\g\a\h\q\s\3\q\f\b\e\l\n\k\z\7\t\k\t\w\o\e\f\s\v\p\c\o\t\d\v\x\w\a\z\g\6\4\4\b\g\g\9\u\x\k\a\6\o\g\r\l\7\5\m\6\z\8\x\o\p\x\8\n\8\y\4\6\5\p\h\m\6\h\w\t\t\b\a\o\k\3\e\y\c\o\2\q\m\5\x\2\w\3\5\z\e\7\y\y\s\c\5\u\y\q\6\c\4\4\y\d\8\c\c\0\n\w\e\t\u\i\f\p\s\w\b\0\2\6\0\a\i\4\r\y\3\l\s\e\r\5\2\y\s\n\p\w\r\6\c\l\o\g\v\g\e\0\b\z\5\5\g\l\u\y\m\z\h\4\n\6\g\a\d\5\4\8\q\5\0\r\4\j\6\z\5\h\p\c\u\h\z\i\y\m\y\i\f\q\1\v\5\y\q\g\c\y\z\g\l\3\b\1\7\v\e\g\8\5\3\t\b\7\c\g\1\0\o\n\c\1\0\9\w\q\s\b\m\d\2\8\k\a\b\r\o\m\c\o\q\h\w\5\p\7\x\7\e\d\q\n\r\s\x\o\o\k\a\a\3\f\0\i\a\u\a\3\9\z\q\7\h\8\x\m\m\w\0\x\d\p\c\v\v\2\b\b\d\x\u\e\n\z\4\m\d\9\8\h\9\a\v\w\o\j\n\9\h\h\7\g\q\w\k\j\j\0\b\q\q\l\5\z\6\2\z\8\1\w\b\6\b\g\5\l\2\l\e\r\k\y\o\b\s\7\c\q\g\g\k\n\4\8\n\f\c\1\3\n\q\n\1\n\g\7\m\o\q\d\w\m\8\t\8\a\h\k\n\s\8\q\c\2 ]] 00:06:13.132 16:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.132 16:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:13.132 [2024-07-12 16:09:56.685392] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:13.132 [2024-07-12 16:09:56.685481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63004 ] 00:06:13.132 [2024-07-12 16:09:56.823671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.391 [2024-07-12 16:09:56.882425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.391 [2024-07-12 16:09:56.911981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.391  Copying: 512/512 [B] (average 250 kBps) 00:06:13.391 00:06:13.391 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fjhdzita3zkpjdbd2d4grayeqvkpyp3xvntoz2yq53z3ela3gjq6o172g87f4d3k9u3yhdz1sz8jmw11fcs8pz8hcogww9hn2xbp513p9firuoh7qunukpxgahqs3qfbelnkz7tktwoefsvpcotdvxwazg644bgg9uxka6ogrl75m6z8xopx8n8y465phm6hwttbaok3eyco2qm5x2w35ze7yysc5uyq6c44yd8cc0nwetuifpswb0260ai4ry3lser52ysnpwr6clogvge0bz55gluymzh4n6gad548q50r4j6z5hpcuhziymyifq1v5yqgcyzgl3b17veg853tb7cg10onc109wqsbmd28kabromcoqhw5p7x7edqnrsxookaa3f0iaua39zq7h8xmmw0xdpcvv2bbdxuenz4md98h9avwojn9hh7gqwkjj0bqql5z62z81wb6bg5l2lerkyobs7cqggkn48nfc13nqn1ng7moqdwm8t8ahkns8qc2 == \f\j\h\d\z\i\t\a\3\z\k\p\j\d\b\d\2\d\4\g\r\a\y\e\q\v\k\p\y\p\3\x\v\n\t\o\z\2\y\q\5\3\z\3\e\l\a\3\g\j\q\6\o\1\7\2\g\8\7\f\4\d\3\k\9\u\3\y\h\d\z\1\s\z\8\j\m\w\1\1\f\c\s\8\p\z\8\h\c\o\g\w\w\9\h\n\2\x\b\p\5\1\3\p\9\f\i\r\u\o\h\7\q\u\n\u\k\p\x\g\a\h\q\s\3\q\f\b\e\l\n\k\z\7\t\k\t\w\o\e\f\s\v\p\c\o\t\d\v\x\w\a\z\g\6\4\4\b\g\g\9\u\x\k\a\6\o\g\r\l\7\5\m\6\z\8\x\o\p\x\8\n\8\y\4\6\5\p\h\m\6\h\w\t\t\b\a\o\k\3\e\y\c\o\2\q\m\5\x\2\w\3\5\z\e\7\y\y\s\c\5\u\y\q\6\c\4\4\y\d\8\c\c\0\n\w\e\t\u\i\f\p\s\w\b\0\2\6\0\a\i\4\r\y\3\l\s\e\r\5\2\y\s\n\p\w\r\6\c\l\o\g\v\g\e\0\b\z\5\5\g\l\u\y\m\z\h\4\n\6\g\a\d\5\4\8\q\5\0\r\4\j\6\z\5\h\p\c\u\h\z\i\y\m\y\i\f\q\1\v\5\y\q\g\c\y\z\g\l\3\b\1\7\v\e\g\8\5\3\t\b\7\c\g\1\0\o\n\c\1\0\9\w\q\s\b\m\d\2\8\k\a\b\r\o\m\c\o\q\h\w\5\p\7\x\7\e\d\q\n\r\s\x\o\o\k\a\a\3\f\0\i\a\u\a\3\9\z\q\7\h\8\x\m\m\w\0\x\d\p\c\v\v\2\b\b\d\x\u\e\n\z\4\m\d\9\8\h\9\a\v\w\o\j\n\9\h\h\7\g\q\w\k\j\j\0\b\q\q\l\5\z\6\2\z\8\1\w\b\6\b\g\5\l\2\l\e\r\k\y\o\b\s\7\c\q\g\g\k\n\4\8\n\f\c\1\3\n\q\n\1\n\g\7\m\o\q\d\w\m\8\t\8\a\h\k\n\s\8\q\c\2 ]] 00:06:13.391 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:13.391 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:13.391 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:13.391 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:13.391 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.391 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:13.650 [2024-07-12 16:09:57.140541] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:13.650 [2024-07-12 16:09:57.140666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63014 ] 00:06:13.650 [2024-07-12 16:09:57.276445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.650 [2024-07-12 16:09:57.326823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.650 [2024-07-12 16:09:57.353009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.909  Copying: 512/512 [B] (average 500 kBps) 00:06:13.909 00:06:13.909 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1ha05y2wlcb3e3aahivquys4pvwytaoyxwwsttg4v2lop48nqut02rvp0zl7aaqz41w95h5q9pnk6k32fk839cxzof5rhvpexx9akem0vhlkw92juw6yhqp70bgpr1bdf98mdp7hpidgv9w5v7qnyt3fp2sorzp0vtxd5hzz1pgg4y2kyj8ezn1x4jslfqakz87ymzfyey0klhgd108dtm23pkosmgl36dqin97zgqikb6sr8efpvwyc0rwxrzh98kqj4veztfdv7hdk9n9fua8j9qt027jnzgxmoid92t0jj5m61m5bpr9lxqw9nfz6zb3dkzwgvcgthcsqjc5rgch132sqbivt4szouhj3k758aj8ghr0g69d18g081m1yyf4f3p6f64wq4jn3o33ygbax8nivc1fu7syqni6g50975n5xz82pvvvzjvp69kpptbpj8wijkua2gvcpg7vcve4l611wbmt28dwgflklattqfv24vjeywnjlvcxf1rck == \1\h\a\0\5\y\2\w\l\c\b\3\e\3\a\a\h\i\v\q\u\y\s\4\p\v\w\y\t\a\o\y\x\w\w\s\t\t\g\4\v\2\l\o\p\4\8\n\q\u\t\0\2\r\v\p\0\z\l\7\a\a\q\z\4\1\w\9\5\h\5\q\9\p\n\k\6\k\3\2\f\k\8\3\9\c\x\z\o\f\5\r\h\v\p\e\x\x\9\a\k\e\m\0\v\h\l\k\w\9\2\j\u\w\6\y\h\q\p\7\0\b\g\p\r\1\b\d\f\9\8\m\d\p\7\h\p\i\d\g\v\9\w\5\v\7\q\n\y\t\3\f\p\2\s\o\r\z\p\0\v\t\x\d\5\h\z\z\1\p\g\g\4\y\2\k\y\j\8\e\z\n\1\x\4\j\s\l\f\q\a\k\z\8\7\y\m\z\f\y\e\y\0\k\l\h\g\d\1\0\8\d\t\m\2\3\p\k\o\s\m\g\l\3\6\d\q\i\n\9\7\z\g\q\i\k\b\6\s\r\8\e\f\p\v\w\y\c\0\r\w\x\r\z\h\9\8\k\q\j\4\v\e\z\t\f\d\v\7\h\d\k\9\n\9\f\u\a\8\j\9\q\t\0\2\7\j\n\z\g\x\m\o\i\d\9\2\t\0\j\j\5\m\6\1\m\5\b\p\r\9\l\x\q\w\9\n\f\z\6\z\b\3\d\k\z\w\g\v\c\g\t\h\c\s\q\j\c\5\r\g\c\h\1\3\2\s\q\b\i\v\t\4\s\z\o\u\h\j\3\k\7\5\8\a\j\8\g\h\r\0\g\6\9\d\1\8\g\0\8\1\m\1\y\y\f\4\f\3\p\6\f\6\4\w\q\4\j\n\3\o\3\3\y\g\b\a\x\8\n\i\v\c\1\f\u\7\s\y\q\n\i\6\g\5\0\9\7\5\n\5\x\z\8\2\p\v\v\v\z\j\v\p\6\9\k\p\p\t\b\p\j\8\w\i\j\k\u\a\2\g\v\c\p\g\7\v\c\v\e\4\l\6\1\1\w\b\m\t\2\8\d\w\g\f\l\k\l\a\t\t\q\f\v\2\4\v\j\e\y\w\n\j\l\v\c\x\f\1\r\c\k ]] 00:06:13.909 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.909 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:13.909 [2024-07-12 16:09:57.580882] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:13.909 [2024-07-12 16:09:57.580973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63023 ] 00:06:14.168 [2024-07-12 16:09:57.719218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.168 [2024-07-12 16:09:57.772029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.168 [2024-07-12 16:09:57.797540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.428  Copying: 512/512 [B] (average 500 kBps) 00:06:14.428 00:06:14.428 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1ha05y2wlcb3e3aahivquys4pvwytaoyxwwsttg4v2lop48nqut02rvp0zl7aaqz41w95h5q9pnk6k32fk839cxzof5rhvpexx9akem0vhlkw92juw6yhqp70bgpr1bdf98mdp7hpidgv9w5v7qnyt3fp2sorzp0vtxd5hzz1pgg4y2kyj8ezn1x4jslfqakz87ymzfyey0klhgd108dtm23pkosmgl36dqin97zgqikb6sr8efpvwyc0rwxrzh98kqj4veztfdv7hdk9n9fua8j9qt027jnzgxmoid92t0jj5m61m5bpr9lxqw9nfz6zb3dkzwgvcgthcsqjc5rgch132sqbivt4szouhj3k758aj8ghr0g69d18g081m1yyf4f3p6f64wq4jn3o33ygbax8nivc1fu7syqni6g50975n5xz82pvvvzjvp69kpptbpj8wijkua2gvcpg7vcve4l611wbmt28dwgflklattqfv24vjeywnjlvcxf1rck == \1\h\a\0\5\y\2\w\l\c\b\3\e\3\a\a\h\i\v\q\u\y\s\4\p\v\w\y\t\a\o\y\x\w\w\s\t\t\g\4\v\2\l\o\p\4\8\n\q\u\t\0\2\r\v\p\0\z\l\7\a\a\q\z\4\1\w\9\5\h\5\q\9\p\n\k\6\k\3\2\f\k\8\3\9\c\x\z\o\f\5\r\h\v\p\e\x\x\9\a\k\e\m\0\v\h\l\k\w\9\2\j\u\w\6\y\h\q\p\7\0\b\g\p\r\1\b\d\f\9\8\m\d\p\7\h\p\i\d\g\v\9\w\5\v\7\q\n\y\t\3\f\p\2\s\o\r\z\p\0\v\t\x\d\5\h\z\z\1\p\g\g\4\y\2\k\y\j\8\e\z\n\1\x\4\j\s\l\f\q\a\k\z\8\7\y\m\z\f\y\e\y\0\k\l\h\g\d\1\0\8\d\t\m\2\3\p\k\o\s\m\g\l\3\6\d\q\i\n\9\7\z\g\q\i\k\b\6\s\r\8\e\f\p\v\w\y\c\0\r\w\x\r\z\h\9\8\k\q\j\4\v\e\z\t\f\d\v\7\h\d\k\9\n\9\f\u\a\8\j\9\q\t\0\2\7\j\n\z\g\x\m\o\i\d\9\2\t\0\j\j\5\m\6\1\m\5\b\p\r\9\l\x\q\w\9\n\f\z\6\z\b\3\d\k\z\w\g\v\c\g\t\h\c\s\q\j\c\5\r\g\c\h\1\3\2\s\q\b\i\v\t\4\s\z\o\u\h\j\3\k\7\5\8\a\j\8\g\h\r\0\g\6\9\d\1\8\g\0\8\1\m\1\y\y\f\4\f\3\p\6\f\6\4\w\q\4\j\n\3\o\3\3\y\g\b\a\x\8\n\i\v\c\1\f\u\7\s\y\q\n\i\6\g\5\0\9\7\5\n\5\x\z\8\2\p\v\v\v\z\j\v\p\6\9\k\p\p\t\b\p\j\8\w\i\j\k\u\a\2\g\v\c\p\g\7\v\c\v\e\4\l\6\1\1\w\b\m\t\2\8\d\w\g\f\l\k\l\a\t\t\q\f\v\2\4\v\j\e\y\w\n\j\l\v\c\x\f\1\r\c\k ]] 00:06:14.428 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.428 16:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:14.428 [2024-07-12 16:09:58.021922] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:14.428 [2024-07-12 16:09:58.022007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63033 ] 00:06:14.687 [2024-07-12 16:09:58.157581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.687 [2024-07-12 16:09:58.208033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.687 [2024-07-12 16:09:58.237274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.687  Copying: 512/512 [B] (average 166 kBps) 00:06:14.687 00:06:14.687 16:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1ha05y2wlcb3e3aahivquys4pvwytaoyxwwsttg4v2lop48nqut02rvp0zl7aaqz41w95h5q9pnk6k32fk839cxzof5rhvpexx9akem0vhlkw92juw6yhqp70bgpr1bdf98mdp7hpidgv9w5v7qnyt3fp2sorzp0vtxd5hzz1pgg4y2kyj8ezn1x4jslfqakz87ymzfyey0klhgd108dtm23pkosmgl36dqin97zgqikb6sr8efpvwyc0rwxrzh98kqj4veztfdv7hdk9n9fua8j9qt027jnzgxmoid92t0jj5m61m5bpr9lxqw9nfz6zb3dkzwgvcgthcsqjc5rgch132sqbivt4szouhj3k758aj8ghr0g69d18g081m1yyf4f3p6f64wq4jn3o33ygbax8nivc1fu7syqni6g50975n5xz82pvvvzjvp69kpptbpj8wijkua2gvcpg7vcve4l611wbmt28dwgflklattqfv24vjeywnjlvcxf1rck == \1\h\a\0\5\y\2\w\l\c\b\3\e\3\a\a\h\i\v\q\u\y\s\4\p\v\w\y\t\a\o\y\x\w\w\s\t\t\g\4\v\2\l\o\p\4\8\n\q\u\t\0\2\r\v\p\0\z\l\7\a\a\q\z\4\1\w\9\5\h\5\q\9\p\n\k\6\k\3\2\f\k\8\3\9\c\x\z\o\f\5\r\h\v\p\e\x\x\9\a\k\e\m\0\v\h\l\k\w\9\2\j\u\w\6\y\h\q\p\7\0\b\g\p\r\1\b\d\f\9\8\m\d\p\7\h\p\i\d\g\v\9\w\5\v\7\q\n\y\t\3\f\p\2\s\o\r\z\p\0\v\t\x\d\5\h\z\z\1\p\g\g\4\y\2\k\y\j\8\e\z\n\1\x\4\j\s\l\f\q\a\k\z\8\7\y\m\z\f\y\e\y\0\k\l\h\g\d\1\0\8\d\t\m\2\3\p\k\o\s\m\g\l\3\6\d\q\i\n\9\7\z\g\q\i\k\b\6\s\r\8\e\f\p\v\w\y\c\0\r\w\x\r\z\h\9\8\k\q\j\4\v\e\z\t\f\d\v\7\h\d\k\9\n\9\f\u\a\8\j\9\q\t\0\2\7\j\n\z\g\x\m\o\i\d\9\2\t\0\j\j\5\m\6\1\m\5\b\p\r\9\l\x\q\w\9\n\f\z\6\z\b\3\d\k\z\w\g\v\c\g\t\h\c\s\q\j\c\5\r\g\c\h\1\3\2\s\q\b\i\v\t\4\s\z\o\u\h\j\3\k\7\5\8\a\j\8\g\h\r\0\g\6\9\d\1\8\g\0\8\1\m\1\y\y\f\4\f\3\p\6\f\6\4\w\q\4\j\n\3\o\3\3\y\g\b\a\x\8\n\i\v\c\1\f\u\7\s\y\q\n\i\6\g\5\0\9\7\5\n\5\x\z\8\2\p\v\v\v\z\j\v\p\6\9\k\p\p\t\b\p\j\8\w\i\j\k\u\a\2\g\v\c\p\g\7\v\c\v\e\4\l\6\1\1\w\b\m\t\2\8\d\w\g\f\l\k\l\a\t\t\q\f\v\2\4\v\j\e\y\w\n\j\l\v\c\x\f\1\r\c\k ]] 00:06:14.687 16:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.687 16:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:14.946 [2024-07-12 16:09:58.443412] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:14.946 [2024-07-12 16:09:58.443524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63037 ] 00:06:14.946 [2024-07-12 16:09:58.578688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.946 [2024-07-12 16:09:58.627019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.946 [2024-07-12 16:09:58.653609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.205  Copying: 512/512 [B] (average 250 kBps) 00:06:15.205 00:06:15.205 ************************************ 00:06:15.205 END TEST dd_flags_misc 00:06:15.205 ************************************ 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1ha05y2wlcb3e3aahivquys4pvwytaoyxwwsttg4v2lop48nqut02rvp0zl7aaqz41w95h5q9pnk6k32fk839cxzof5rhvpexx9akem0vhlkw92juw6yhqp70bgpr1bdf98mdp7hpidgv9w5v7qnyt3fp2sorzp0vtxd5hzz1pgg4y2kyj8ezn1x4jslfqakz87ymzfyey0klhgd108dtm23pkosmgl36dqin97zgqikb6sr8efpvwyc0rwxrzh98kqj4veztfdv7hdk9n9fua8j9qt027jnzgxmoid92t0jj5m61m5bpr9lxqw9nfz6zb3dkzwgvcgthcsqjc5rgch132sqbivt4szouhj3k758aj8ghr0g69d18g081m1yyf4f3p6f64wq4jn3o33ygbax8nivc1fu7syqni6g50975n5xz82pvvvzjvp69kpptbpj8wijkua2gvcpg7vcve4l611wbmt28dwgflklattqfv24vjeywnjlvcxf1rck == \1\h\a\0\5\y\2\w\l\c\b\3\e\3\a\a\h\i\v\q\u\y\s\4\p\v\w\y\t\a\o\y\x\w\w\s\t\t\g\4\v\2\l\o\p\4\8\n\q\u\t\0\2\r\v\p\0\z\l\7\a\a\q\z\4\1\w\9\5\h\5\q\9\p\n\k\6\k\3\2\f\k\8\3\9\c\x\z\o\f\5\r\h\v\p\e\x\x\9\a\k\e\m\0\v\h\l\k\w\9\2\j\u\w\6\y\h\q\p\7\0\b\g\p\r\1\b\d\f\9\8\m\d\p\7\h\p\i\d\g\v\9\w\5\v\7\q\n\y\t\3\f\p\2\s\o\r\z\p\0\v\t\x\d\5\h\z\z\1\p\g\g\4\y\2\k\y\j\8\e\z\n\1\x\4\j\s\l\f\q\a\k\z\8\7\y\m\z\f\y\e\y\0\k\l\h\g\d\1\0\8\d\t\m\2\3\p\k\o\s\m\g\l\3\6\d\q\i\n\9\7\z\g\q\i\k\b\6\s\r\8\e\f\p\v\w\y\c\0\r\w\x\r\z\h\9\8\k\q\j\4\v\e\z\t\f\d\v\7\h\d\k\9\n\9\f\u\a\8\j\9\q\t\0\2\7\j\n\z\g\x\m\o\i\d\9\2\t\0\j\j\5\m\6\1\m\5\b\p\r\9\l\x\q\w\9\n\f\z\6\z\b\3\d\k\z\w\g\v\c\g\t\h\c\s\q\j\c\5\r\g\c\h\1\3\2\s\q\b\i\v\t\4\s\z\o\u\h\j\3\k\7\5\8\a\j\8\g\h\r\0\g\6\9\d\1\8\g\0\8\1\m\1\y\y\f\4\f\3\p\6\f\6\4\w\q\4\j\n\3\o\3\3\y\g\b\a\x\8\n\i\v\c\1\f\u\7\s\y\q\n\i\6\g\5\0\9\7\5\n\5\x\z\8\2\p\v\v\v\z\j\v\p\6\9\k\p\p\t\b\p\j\8\w\i\j\k\u\a\2\g\v\c\p\g\7\v\c\v\e\4\l\6\1\1\w\b\m\t\2\8\d\w\g\f\l\k\l\a\t\t\q\f\v\2\4\v\j\e\y\w\n\j\l\v\c\x\f\1\r\c\k ]] 00:06:15.206 00:06:15.206 real 0m3.463s 00:06:15.206 user 0m1.891s 00:06:15.206 sys 0m1.320s 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:15.206 * Second test run, disabling liburing, forcing AIO 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:15.206 ************************************ 00:06:15.206 START TEST dd_flag_append_forced_aio 00:06:15.206 ************************************ 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=s2nexmh440edqou9io68lfr3449gfdfs 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=bup830b6fw49nfj83coc7heme5l65pbd 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s s2nexmh440edqou9io68lfr3449gfdfs 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s bup830b6fw49nfj83coc7heme5l65pbd 00:06:15.206 16:09:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:15.465 [2024-07-12 16:09:58.942091] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:15.465 [2024-07-12 16:09:58.942201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63071 ] 00:06:15.465 [2024-07-12 16:09:59.078237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.465 [2024-07-12 16:09:59.126022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.465 [2024-07-12 16:09:59.152119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.725  Copying: 32/32 [B] (average 31 kBps) 00:06:15.725 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ bup830b6fw49nfj83coc7heme5l65pbds2nexmh440edqou9io68lfr3449gfdfs == \b\u\p\8\3\0\b\6\f\w\4\9\n\f\j\8\3\c\o\c\7\h\e\m\e\5\l\6\5\p\b\d\s\2\n\e\x\m\h\4\4\0\e\d\q\o\u\9\i\o\6\8\l\f\r\3\4\4\9\g\f\d\f\s ]] 00:06:15.725 00:06:15.725 real 0m0.454s 00:06:15.725 user 0m0.252s 00:06:15.725 sys 0m0.085s 00:06:15.725 ************************************ 00:06:15.725 END TEST dd_flag_append_forced_aio 00:06:15.725 ************************************ 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:15.725 ************************************ 00:06:15.725 START TEST dd_flag_directory_forced_aio 00:06:15.725 ************************************ 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.725 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.725 [2024-07-12 16:09:59.430279] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:15.725 [2024-07-12 16:09:59.430366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63092 ] 00:06:15.984 [2024-07-12 16:09:59.556983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.984 [2024-07-12 16:09:59.610229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.984 [2024-07-12 16:09:59.636764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.985 [2024-07-12 16:09:59.655312] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.985 [2024-07-12 16:09:59.655364] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.985 [2024-07-12 16:09:59.655393] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.244 [2024-07-12 16:09:59.719982] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.244 16:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:16.244 [2024-07-12 16:09:59.849183] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:16.244 [2024-07-12 16:09:59.849273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:06:16.503 [2024-07-12 16:09:59.987406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.503 [2024-07-12 16:10:00.041905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.503 [2024-07-12 16:10:00.068000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.503 [2024-07-12 16:10:00.083361] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:16.503 [2024-07-12 16:10:00.083412] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:16.503 [2024-07-12 16:10:00.083440] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.503 [2024-07-12 16:10:00.138609] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.503 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:16.503 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.503 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:16.503 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:16.503 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:16.503 
16:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.503 00:06:16.503 real 0m0.829s 00:06:16.503 user 0m0.451s 00:06:16.503 sys 0m0.171s 00:06:16.503 ************************************ 00:06:16.503 END TEST dd_flag_directory_forced_aio 00:06:16.503 ************************************ 00:06:16.504 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.504 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:16.763 ************************************ 00:06:16.763 START TEST dd_flag_nofollow_forced_aio 00:06:16.763 ************************************ 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.763 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.763 [2024-07-12 16:10:00.329856] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:16.763 [2024-07-12 16:10:00.329960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63130 ] 00:06:16.763 [2024-07-12 16:10:00.466364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.022 [2024-07-12 16:10:00.515705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.022 [2024-07-12 16:10:00.541332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.022 [2024-07-12 16:10:00.556751] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:17.022 [2024-07-12 16:10:00.556798] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:17.022 [2024-07-12 16:10:00.556827] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.022 [2024-07-12 16:10:00.615824] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
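The nofollow checks running here follow the suite's expected-failure pattern: spdk_dd is pointed at a symlink with --iflag=nofollow (and next with --oflag=nofollow), the "Too many levels of symbolic links" error is the desired outcome, and the exit status is folded down so the surrounding NOT helper can assert the command did not succeed. A minimal bash sketch of that pattern, assuming SPDK_DD, DUMP0 and DUMP1 are placeholder variables set by the caller (they are not the suite's own names):

link="$DUMP0.link"
ln -fs "$DUMP0" "$link"                          # same symlink setup as in the test above
es=0
"$SPDK_DD" --aio --if="$link" --iflag=nofollow --of="$DUMP1" || es=$?
(( es > 128 )) && es=$(( es - 128 ))             # mirrors the es=216 -> es=88 step in the log
(( es != 0 ))                                    # pass only if the copy was refused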
00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.022 16:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:17.022 [2024-07-12 16:10:00.743575] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:17.022 [2024-07-12 16:10:00.743811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63139 ] 00:06:17.281 [2024-07-12 16:10:00.878705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.281 [2024-07-12 16:10:00.926059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.281 [2024-07-12 16:10:00.952276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.281 [2024-07-12 16:10:00.968586] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:17.281 [2024-07-12 16:10:00.968643] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:17.281 [2024-07-12 16:10:00.968673] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.540 [2024-07-12 16:10:01.033961] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:17.540 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.540 [2024-07-12 16:10:01.182374] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:17.540 [2024-07-12 16:10:01.182475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63147 ] 00:06:17.799 [2024-07-12 16:10:01.316352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.799 [2024-07-12 16:10:01.372122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.799 [2024-07-12 16:10:01.402005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.058  Copying: 512/512 [B] (average 500 kBps) 00:06:18.058 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ hhst2bmjj1t4d4a4cr9t14s4gk550ge7vuqgje95hwsalzewcbpbrav9iavwn0yecqkan9nrbkzld4nj03glp9tsdb8uify69ucx5gyggzxc1d9l366mfx0hrsaqk8135ax9x8vqxasjlpj00da0mbr345z8wzfl1jj29j1ktjriffccx5qrb0yawpg1j6n8jki09yq8l1dkew1o6e7lu0xpr90v7dmx84zd0lvqfddcr29ok5q4zr72mc73arin7byyhoqikjh8em1qxyg9hltv2xmec5yc1mttj5ly9kyanovv57xpr2417jliortaln73m15pmo9dz1kp8soml6ggh5fzqxah3edi6840l4bgyl3leu4olvxybhgdtgzwdombaslondh88af0dyznr8dwchko3z273i6wvc40ov7ojwrz4eybyeqj4selzyb89a28qrt5aq1c0vyvgkpxgkksnakzxe4a0xnhmsbh3mp1j1ooeo7b1xvt42bub6ro == \h\h\s\t\2\b\m\j\j\1\t\4\d\4\a\4\c\r\9\t\1\4\s\4\g\k\5\5\0\g\e\7\v\u\q\g\j\e\9\5\h\w\s\a\l\z\e\w\c\b\p\b\r\a\v\9\i\a\v\w\n\0\y\e\c\q\k\a\n\9\n\r\b\k\z\l\d\4\n\j\0\3\g\l\p\9\t\s\d\b\8\u\i\f\y\6\9\u\c\x\5\g\y\g\g\z\x\c\1\d\9\l\3\6\6\m\f\x\0\h\r\s\a\q\k\8\1\3\5\a\x\9\x\8\v\q\x\a\s\j\l\p\j\0\0\d\a\0\m\b\r\3\4\5\z\8\w\z\f\l\1\j\j\2\9\j\1\k\t\j\r\i\f\f\c\c\x\5\q\r\b\0\y\a\w\p\g\1\j\6\n\8\j\k\i\0\9\y\q\8\l\1\d\k\e\w\1\o\6\e\7\l\u\0\x\p\r\9\0\v\7\d\m\x\8\4\z\d\0\l\v\q\f\d\d\c\r\2\9\o\k\5\q\4\z\r\7\2\m\c\7\3\a\r\i\n\7\b\y\y\h\o\q\i\k\j\h\8\e\m\1\q\x\y\g\9\h\l\t\v\2\x\m\e\c\5\y\c\1\m\t\t\j\5\l\y\9\k\y\a\n\o\v\v\5\7\x\p\r\2\4\1\7\j\l\i\o\r\t\a\l\n\7\3\m\1\5\p\m\o\9\d\z\1\k\p\8\s\o\m\l\6\g\g\h\5\f\z\q\x\a\h\3\e\d\i\6\8\4\0\l\4\b\g\y\l\3\l\e\u\4\o\l\v\x\y\b\h\g\d\t\g\z\w\d\o\m\b\a\s\l\o\n\d\h\8\8\a\f\0\d\y\z\n\r\8\d\w\c\h\k\o\3\z\2\7\3\i\6\w\v\c\4\0\o\v\7\o\j\w\r\z\4\e\y\b\y\e\q\j\4\s\e\l\z\y\b\8\9\a\2\8\q\r\t\5\a\q\1\c\0\v\y\v\g\k\p\x\g\k\k\s\n\a\k\z\x\e\4\a\0\x\n\h\m\s\b\h\3\m\p\1\j\1\o\o\e\o\7\b\1\x\v\t\4\2\b\u\b\6\r\o ]] 00:06:18.058 00:06:18.058 real 0m1.316s 00:06:18.058 user 0m0.702s 00:06:18.058 sys 0m0.285s 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.058 ************************************ 00:06:18.058 END TEST dd_flag_nofollow_forced_aio 00:06:18.058 ************************************ 00:06:18.058 16:10:01 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 ************************************ 00:06:18.058 START TEST dd_flag_noatime_forced_aio 00:06:18.058 ************************************ 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720800601 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720800601 00:06:18.058 16:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:18.995 16:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.995 [2024-07-12 16:10:02.713951] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
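The noatime case starting here captures the source file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and requires the atime to be unchanged; a follow-up copy without the flag is then expected to advance it. A rough bash reconstruction, with SRC, DST and SPDK_DD as placeholder names rather than the suite's own variables:

atime_before=$(stat --printf=%X "$SRC")
sleep 1                                           # any atime update would now be at least a second later
"$SPDK_DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
(( $(stat --printf=%X "$SRC") == atime_before ))  # noatime copy must not touch the access time
"$SPDK_DD" --aio --if="$SRC" --of="$DST"
(( $(stat --printf=%X "$SRC") > atime_before ))   # a plain copy is expected to advance it here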
00:06:18.995 [2024-07-12 16:10:02.714053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63182 ] 00:06:19.253 [2024-07-12 16:10:02.856549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.253 [2024-07-12 16:10:02.925374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.253 [2024-07-12 16:10:02.956462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.512  Copying: 512/512 [B] (average 500 kBps) 00:06:19.512 00:06:19.512 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.512 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720800601 )) 00:06:19.512 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.512 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720800601 )) 00:06:19.512 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.512 [2024-07-12 16:10:03.219980] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:19.512 [2024-07-12 16:10:03.220075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63199 ] 00:06:19.770 [2024-07-12 16:10:03.355848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.770 [2024-07-12 16:10:03.407428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.770 [2024-07-12 16:10:03.433707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.028  Copying: 512/512 [B] (average 500 kBps) 00:06:20.028 00:06:20.028 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.028 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720800603 )) 00:06:20.028 00:06:20.028 real 0m1.990s 00:06:20.028 user 0m0.539s 00:06:20.028 sys 0m0.211s 00:06:20.028 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.028 ************************************ 00:06:20.028 END TEST dd_flag_noatime_forced_aio 00:06:20.028 ************************************ 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.029 16:10:03 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:20.029 ************************************ 00:06:20.029 START TEST dd_flags_misc_forced_aio 00:06:20.029 ************************************ 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:20.029 16:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:20.029 [2024-07-12 16:10:03.737993] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:20.029 [2024-07-12 16:10:03.738102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63220 ] 00:06:20.288 [2024-07-12 16:10:03.872551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.288 [2024-07-12 16:10:03.926199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.288 [2024-07-12 16:10:03.957656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.547  Copying: 512/512 [B] (average 500 kBps) 00:06:20.547 00:06:20.547 16:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ege67d1punio1n0gklhi61gx8rq914ekrrh6n7ndx6crmoqnnvts0w5hhjap3yqjrhdxmq2z4wkjym6ypy4lifkijl5jb4204ouhzz44wxagxxyvabloy6k6i9ey5mt0gfgd01ltmkuby2h3pmi836cdof2crhm6594fp5ggz31rn46of9xohn8ubqk7u5swrjbo8plusokfp3wu43ke1e2tt97h407x96rkb1i4sirk5r7l47otag7n2m55ah2cussa5j3st0qnajhii705grngryinn4athfg4h7qvqjn43rydwkx27qa4zgoyxjy1cz37u3ikqjcsu5zpds1gpmcxn5zsj4eojwxw7l6zdxoiy0ju9ycfwq9azmzfmvtq2ns5cdbfwc08h9r8h1xlvbsmn304r7n3cyi2bmzgscl9pfvnafi6glxar329s9ry665ta7uzxw76ngitbfn20id022lv1eghww8o58ac13efx3wfkm01lx0ljsflj2wg == 
\e\g\e\6\7\d\1\p\u\n\i\o\1\n\0\g\k\l\h\i\6\1\g\x\8\r\q\9\1\4\e\k\r\r\h\6\n\7\n\d\x\6\c\r\m\o\q\n\n\v\t\s\0\w\5\h\h\j\a\p\3\y\q\j\r\h\d\x\m\q\2\z\4\w\k\j\y\m\6\y\p\y\4\l\i\f\k\i\j\l\5\j\b\4\2\0\4\o\u\h\z\z\4\4\w\x\a\g\x\x\y\v\a\b\l\o\y\6\k\6\i\9\e\y\5\m\t\0\g\f\g\d\0\1\l\t\m\k\u\b\y\2\h\3\p\m\i\8\3\6\c\d\o\f\2\c\r\h\m\6\5\9\4\f\p\5\g\g\z\3\1\r\n\4\6\o\f\9\x\o\h\n\8\u\b\q\k\7\u\5\s\w\r\j\b\o\8\p\l\u\s\o\k\f\p\3\w\u\4\3\k\e\1\e\2\t\t\9\7\h\4\0\7\x\9\6\r\k\b\1\i\4\s\i\r\k\5\r\7\l\4\7\o\t\a\g\7\n\2\m\5\5\a\h\2\c\u\s\s\a\5\j\3\s\t\0\q\n\a\j\h\i\i\7\0\5\g\r\n\g\r\y\i\n\n\4\a\t\h\f\g\4\h\7\q\v\q\j\n\4\3\r\y\d\w\k\x\2\7\q\a\4\z\g\o\y\x\j\y\1\c\z\3\7\u\3\i\k\q\j\c\s\u\5\z\p\d\s\1\g\p\m\c\x\n\5\z\s\j\4\e\o\j\w\x\w\7\l\6\z\d\x\o\i\y\0\j\u\9\y\c\f\w\q\9\a\z\m\z\f\m\v\t\q\2\n\s\5\c\d\b\f\w\c\0\8\h\9\r\8\h\1\x\l\v\b\s\m\n\3\0\4\r\7\n\3\c\y\i\2\b\m\z\g\s\c\l\9\p\f\v\n\a\f\i\6\g\l\x\a\r\3\2\9\s\9\r\y\6\6\5\t\a\7\u\z\x\w\7\6\n\g\i\t\b\f\n\2\0\i\d\0\2\2\l\v\1\e\g\h\w\w\8\o\5\8\a\c\1\3\e\f\x\3\w\f\k\m\0\1\l\x\0\l\j\s\f\l\j\2\w\g ]] 00:06:20.547 16:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:20.547 16:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:20.547 [2024-07-12 16:10:04.200684] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:20.547 [2024-07-12 16:10:04.200833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63233 ] 00:06:20.806 [2024-07-12 16:10:04.338348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.806 [2024-07-12 16:10:04.386219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.806 [2024-07-12 16:10:04.411766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.065  Copying: 512/512 [B] (average 500 kBps) 00:06:21.065 00:06:21.066 16:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ege67d1punio1n0gklhi61gx8rq914ekrrh6n7ndx6crmoqnnvts0w5hhjap3yqjrhdxmq2z4wkjym6ypy4lifkijl5jb4204ouhzz44wxagxxyvabloy6k6i9ey5mt0gfgd01ltmkuby2h3pmi836cdof2crhm6594fp5ggz31rn46of9xohn8ubqk7u5swrjbo8plusokfp3wu43ke1e2tt97h407x96rkb1i4sirk5r7l47otag7n2m55ah2cussa5j3st0qnajhii705grngryinn4athfg4h7qvqjn43rydwkx27qa4zgoyxjy1cz37u3ikqjcsu5zpds1gpmcxn5zsj4eojwxw7l6zdxoiy0ju9ycfwq9azmzfmvtq2ns5cdbfwc08h9r8h1xlvbsmn304r7n3cyi2bmzgscl9pfvnafi6glxar329s9ry665ta7uzxw76ngitbfn20id022lv1eghww8o58ac13efx3wfkm01lx0ljsflj2wg == 
\e\g\e\6\7\d\1\p\u\n\i\o\1\n\0\g\k\l\h\i\6\1\g\x\8\r\q\9\1\4\e\k\r\r\h\6\n\7\n\d\x\6\c\r\m\o\q\n\n\v\t\s\0\w\5\h\h\j\a\p\3\y\q\j\r\h\d\x\m\q\2\z\4\w\k\j\y\m\6\y\p\y\4\l\i\f\k\i\j\l\5\j\b\4\2\0\4\o\u\h\z\z\4\4\w\x\a\g\x\x\y\v\a\b\l\o\y\6\k\6\i\9\e\y\5\m\t\0\g\f\g\d\0\1\l\t\m\k\u\b\y\2\h\3\p\m\i\8\3\6\c\d\o\f\2\c\r\h\m\6\5\9\4\f\p\5\g\g\z\3\1\r\n\4\6\o\f\9\x\o\h\n\8\u\b\q\k\7\u\5\s\w\r\j\b\o\8\p\l\u\s\o\k\f\p\3\w\u\4\3\k\e\1\e\2\t\t\9\7\h\4\0\7\x\9\6\r\k\b\1\i\4\s\i\r\k\5\r\7\l\4\7\o\t\a\g\7\n\2\m\5\5\a\h\2\c\u\s\s\a\5\j\3\s\t\0\q\n\a\j\h\i\i\7\0\5\g\r\n\g\r\y\i\n\n\4\a\t\h\f\g\4\h\7\q\v\q\j\n\4\3\r\y\d\w\k\x\2\7\q\a\4\z\g\o\y\x\j\y\1\c\z\3\7\u\3\i\k\q\j\c\s\u\5\z\p\d\s\1\g\p\m\c\x\n\5\z\s\j\4\e\o\j\w\x\w\7\l\6\z\d\x\o\i\y\0\j\u\9\y\c\f\w\q\9\a\z\m\z\f\m\v\t\q\2\n\s\5\c\d\b\f\w\c\0\8\h\9\r\8\h\1\x\l\v\b\s\m\n\3\0\4\r\7\n\3\c\y\i\2\b\m\z\g\s\c\l\9\p\f\v\n\a\f\i\6\g\l\x\a\r\3\2\9\s\9\r\y\6\6\5\t\a\7\u\z\x\w\7\6\n\g\i\t\b\f\n\2\0\i\d\0\2\2\l\v\1\e\g\h\w\w\8\o\5\8\a\c\1\3\e\f\x\3\w\f\k\m\0\1\l\x\0\l\j\s\f\l\j\2\w\g ]] 00:06:21.066 16:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.066 16:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:21.066 [2024-07-12 16:10:04.639377] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:21.066 [2024-07-12 16:10:04.639474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63235 ] 00:06:21.066 [2024-07-12 16:10:04.773461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.325 [2024-07-12 16:10:04.826040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.325 [2024-07-12 16:10:04.853745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.325  Copying: 512/512 [B] (average 125 kBps) 00:06:21.325 00:06:21.325 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ege67d1punio1n0gklhi61gx8rq914ekrrh6n7ndx6crmoqnnvts0w5hhjap3yqjrhdxmq2z4wkjym6ypy4lifkijl5jb4204ouhzz44wxagxxyvabloy6k6i9ey5mt0gfgd01ltmkuby2h3pmi836cdof2crhm6594fp5ggz31rn46of9xohn8ubqk7u5swrjbo8plusokfp3wu43ke1e2tt97h407x96rkb1i4sirk5r7l47otag7n2m55ah2cussa5j3st0qnajhii705grngryinn4athfg4h7qvqjn43rydwkx27qa4zgoyxjy1cz37u3ikqjcsu5zpds1gpmcxn5zsj4eojwxw7l6zdxoiy0ju9ycfwq9azmzfmvtq2ns5cdbfwc08h9r8h1xlvbsmn304r7n3cyi2bmzgscl9pfvnafi6glxar329s9ry665ta7uzxw76ngitbfn20id022lv1eghww8o58ac13efx3wfkm01lx0ljsflj2wg == 
\e\g\e\6\7\d\1\p\u\n\i\o\1\n\0\g\k\l\h\i\6\1\g\x\8\r\q\9\1\4\e\k\r\r\h\6\n\7\n\d\x\6\c\r\m\o\q\n\n\v\t\s\0\w\5\h\h\j\a\p\3\y\q\j\r\h\d\x\m\q\2\z\4\w\k\j\y\m\6\y\p\y\4\l\i\f\k\i\j\l\5\j\b\4\2\0\4\o\u\h\z\z\4\4\w\x\a\g\x\x\y\v\a\b\l\o\y\6\k\6\i\9\e\y\5\m\t\0\g\f\g\d\0\1\l\t\m\k\u\b\y\2\h\3\p\m\i\8\3\6\c\d\o\f\2\c\r\h\m\6\5\9\4\f\p\5\g\g\z\3\1\r\n\4\6\o\f\9\x\o\h\n\8\u\b\q\k\7\u\5\s\w\r\j\b\o\8\p\l\u\s\o\k\f\p\3\w\u\4\3\k\e\1\e\2\t\t\9\7\h\4\0\7\x\9\6\r\k\b\1\i\4\s\i\r\k\5\r\7\l\4\7\o\t\a\g\7\n\2\m\5\5\a\h\2\c\u\s\s\a\5\j\3\s\t\0\q\n\a\j\h\i\i\7\0\5\g\r\n\g\r\y\i\n\n\4\a\t\h\f\g\4\h\7\q\v\q\j\n\4\3\r\y\d\w\k\x\2\7\q\a\4\z\g\o\y\x\j\y\1\c\z\3\7\u\3\i\k\q\j\c\s\u\5\z\p\d\s\1\g\p\m\c\x\n\5\z\s\j\4\e\o\j\w\x\w\7\l\6\z\d\x\o\i\y\0\j\u\9\y\c\f\w\q\9\a\z\m\z\f\m\v\t\q\2\n\s\5\c\d\b\f\w\c\0\8\h\9\r\8\h\1\x\l\v\b\s\m\n\3\0\4\r\7\n\3\c\y\i\2\b\m\z\g\s\c\l\9\p\f\v\n\a\f\i\6\g\l\x\a\r\3\2\9\s\9\r\y\6\6\5\t\a\7\u\z\x\w\7\6\n\g\i\t\b\f\n\2\0\i\d\0\2\2\l\v\1\e\g\h\w\w\8\o\5\8\a\c\1\3\e\f\x\3\w\f\k\m\0\1\l\x\0\l\j\s\f\l\j\2\w\g ]] 00:06:21.325 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.325 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:21.585 [2024-07-12 16:10:05.072383] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:21.585 [2024-07-12 16:10:05.072466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63248 ] 00:06:21.585 [2024-07-12 16:10:05.194830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.585 [2024-07-12 16:10:05.247531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.585 [2024-07-12 16:10:05.276727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.844  Copying: 512/512 [B] (average 166 kBps) 00:06:21.844 00:06:21.845 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ege67d1punio1n0gklhi61gx8rq914ekrrh6n7ndx6crmoqnnvts0w5hhjap3yqjrhdxmq2z4wkjym6ypy4lifkijl5jb4204ouhzz44wxagxxyvabloy6k6i9ey5mt0gfgd01ltmkuby2h3pmi836cdof2crhm6594fp5ggz31rn46of9xohn8ubqk7u5swrjbo8plusokfp3wu43ke1e2tt97h407x96rkb1i4sirk5r7l47otag7n2m55ah2cussa5j3st0qnajhii705grngryinn4athfg4h7qvqjn43rydwkx27qa4zgoyxjy1cz37u3ikqjcsu5zpds1gpmcxn5zsj4eojwxw7l6zdxoiy0ju9ycfwq9azmzfmvtq2ns5cdbfwc08h9r8h1xlvbsmn304r7n3cyi2bmzgscl9pfvnafi6glxar329s9ry665ta7uzxw76ngitbfn20id022lv1eghww8o58ac13efx3wfkm01lx0ljsflj2wg == 
\e\g\e\6\7\d\1\p\u\n\i\o\1\n\0\g\k\l\h\i\6\1\g\x\8\r\q\9\1\4\e\k\r\r\h\6\n\7\n\d\x\6\c\r\m\o\q\n\n\v\t\s\0\w\5\h\h\j\a\p\3\y\q\j\r\h\d\x\m\q\2\z\4\w\k\j\y\m\6\y\p\y\4\l\i\f\k\i\j\l\5\j\b\4\2\0\4\o\u\h\z\z\4\4\w\x\a\g\x\x\y\v\a\b\l\o\y\6\k\6\i\9\e\y\5\m\t\0\g\f\g\d\0\1\l\t\m\k\u\b\y\2\h\3\p\m\i\8\3\6\c\d\o\f\2\c\r\h\m\6\5\9\4\f\p\5\g\g\z\3\1\r\n\4\6\o\f\9\x\o\h\n\8\u\b\q\k\7\u\5\s\w\r\j\b\o\8\p\l\u\s\o\k\f\p\3\w\u\4\3\k\e\1\e\2\t\t\9\7\h\4\0\7\x\9\6\r\k\b\1\i\4\s\i\r\k\5\r\7\l\4\7\o\t\a\g\7\n\2\m\5\5\a\h\2\c\u\s\s\a\5\j\3\s\t\0\q\n\a\j\h\i\i\7\0\5\g\r\n\g\r\y\i\n\n\4\a\t\h\f\g\4\h\7\q\v\q\j\n\4\3\r\y\d\w\k\x\2\7\q\a\4\z\g\o\y\x\j\y\1\c\z\3\7\u\3\i\k\q\j\c\s\u\5\z\p\d\s\1\g\p\m\c\x\n\5\z\s\j\4\e\o\j\w\x\w\7\l\6\z\d\x\o\i\y\0\j\u\9\y\c\f\w\q\9\a\z\m\z\f\m\v\t\q\2\n\s\5\c\d\b\f\w\c\0\8\h\9\r\8\h\1\x\l\v\b\s\m\n\3\0\4\r\7\n\3\c\y\i\2\b\m\z\g\s\c\l\9\p\f\v\n\a\f\i\6\g\l\x\a\r\3\2\9\s\9\r\y\6\6\5\t\a\7\u\z\x\w\7\6\n\g\i\t\b\f\n\2\0\i\d\0\2\2\l\v\1\e\g\h\w\w\8\o\5\8\a\c\1\3\e\f\x\3\w\f\k\m\0\1\l\x\0\l\j\s\f\l\j\2\w\g ]] 00:06:21.845 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:21.845 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:21.845 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:21.845 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.845 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.845 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:21.845 [2024-07-12 16:10:05.523890] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:21.845 [2024-07-12 16:10:05.523994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63250 ] 00:06:22.104 [2024-07-12 16:10:05.659711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.104 [2024-07-12 16:10:05.712097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.104 [2024-07-12 16:10:05.737914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.363  Copying: 512/512 [B] (average 500 kBps) 00:06:22.363 00:06:22.363 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lru1adtsavrcx0dgoovkjjrk9jn7hbp9hwu785dkyx12p5p78oo3uuie1u9tyyj35ky2gvcjwi3cj22xzxcufm677n9rcarahvzx5dbvemjuhflwaeg8aoj8akzu21cp85btstplc91rrgmqa25mdt2yt42tgqn7lf6ot2yiv9lbu52qnckocb8wynwyns5pwshk2h10kv4hn07uqhoicjqxk42sfum71bjl2i3u26uxi92ir1r9jmh6w6qol8jgmayslza67z0jmmkpig9aojgw3xblw9czo0pfvwvq8ua0511zmlmi890bvqwv7llj6jvixhwy2ksd7jyi60dj8isddr73j7iu1h3dts6zl1vcyq9s220cq9ieraj0i3zjdb1p7bj65cej26rzug2m9risagbevw2sfdes8k0bzzqiywukvmykmtq3lchkuussr3wfx4aopo91rzomdzwk4pvt2hs6kmr2fvtkaao3bewp8vuet5b5h4euq88ecso0 == \l\r\u\1\a\d\t\s\a\v\r\c\x\0\d\g\o\o\v\k\j\j\r\k\9\j\n\7\h\b\p\9\h\w\u\7\8\5\d\k\y\x\1\2\p\5\p\7\8\o\o\3\u\u\i\e\1\u\9\t\y\y\j\3\5\k\y\2\g\v\c\j\w\i\3\c\j\2\2\x\z\x\c\u\f\m\6\7\7\n\9\r\c\a\r\a\h\v\z\x\5\d\b\v\e\m\j\u\h\f\l\w\a\e\g\8\a\o\j\8\a\k\z\u\2\1\c\p\8\5\b\t\s\t\p\l\c\9\1\r\r\g\m\q\a\2\5\m\d\t\2\y\t\4\2\t\g\q\n\7\l\f\6\o\t\2\y\i\v\9\l\b\u\5\2\q\n\c\k\o\c\b\8\w\y\n\w\y\n\s\5\p\w\s\h\k\2\h\1\0\k\v\4\h\n\0\7\u\q\h\o\i\c\j\q\x\k\4\2\s\f\u\m\7\1\b\j\l\2\i\3\u\2\6\u\x\i\9\2\i\r\1\r\9\j\m\h\6\w\6\q\o\l\8\j\g\m\a\y\s\l\z\a\6\7\z\0\j\m\m\k\p\i\g\9\a\o\j\g\w\3\x\b\l\w\9\c\z\o\0\p\f\v\w\v\q\8\u\a\0\5\1\1\z\m\l\m\i\8\9\0\b\v\q\w\v\7\l\l\j\6\j\v\i\x\h\w\y\2\k\s\d\7\j\y\i\6\0\d\j\8\i\s\d\d\r\7\3\j\7\i\u\1\h\3\d\t\s\6\z\l\1\v\c\y\q\9\s\2\2\0\c\q\9\i\e\r\a\j\0\i\3\z\j\d\b\1\p\7\b\j\6\5\c\e\j\2\6\r\z\u\g\2\m\9\r\i\s\a\g\b\e\v\w\2\s\f\d\e\s\8\k\0\b\z\z\q\i\y\w\u\k\v\m\y\k\m\t\q\3\l\c\h\k\u\u\s\s\r\3\w\f\x\4\a\o\p\o\9\1\r\z\o\m\d\z\w\k\4\p\v\t\2\h\s\6\k\m\r\2\f\v\t\k\a\a\o\3\b\e\w\p\8\v\u\e\t\5\b\5\h\4\e\u\q\8\8\e\c\s\o\0 ]] 00:06:22.363 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.363 16:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:22.363 [2024-07-12 16:10:05.959958] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:22.363 [2024-07-12 16:10:05.960053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63263 ] 00:06:22.622 [2024-07-12 16:10:06.089643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.622 [2024-07-12 16:10:06.138259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.622 [2024-07-12 16:10:06.164060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.622  Copying: 512/512 [B] (average 500 kBps) 00:06:22.622 00:06:22.622 16:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lru1adtsavrcx0dgoovkjjrk9jn7hbp9hwu785dkyx12p5p78oo3uuie1u9tyyj35ky2gvcjwi3cj22xzxcufm677n9rcarahvzx5dbvemjuhflwaeg8aoj8akzu21cp85btstplc91rrgmqa25mdt2yt42tgqn7lf6ot2yiv9lbu52qnckocb8wynwyns5pwshk2h10kv4hn07uqhoicjqxk42sfum71bjl2i3u26uxi92ir1r9jmh6w6qol8jgmayslza67z0jmmkpig9aojgw3xblw9czo0pfvwvq8ua0511zmlmi890bvqwv7llj6jvixhwy2ksd7jyi60dj8isddr73j7iu1h3dts6zl1vcyq9s220cq9ieraj0i3zjdb1p7bj65cej26rzug2m9risagbevw2sfdes8k0bzzqiywukvmykmtq3lchkuussr3wfx4aopo91rzomdzwk4pvt2hs6kmr2fvtkaao3bewp8vuet5b5h4euq88ecso0 == \l\r\u\1\a\d\t\s\a\v\r\c\x\0\d\g\o\o\v\k\j\j\r\k\9\j\n\7\h\b\p\9\h\w\u\7\8\5\d\k\y\x\1\2\p\5\p\7\8\o\o\3\u\u\i\e\1\u\9\t\y\y\j\3\5\k\y\2\g\v\c\j\w\i\3\c\j\2\2\x\z\x\c\u\f\m\6\7\7\n\9\r\c\a\r\a\h\v\z\x\5\d\b\v\e\m\j\u\h\f\l\w\a\e\g\8\a\o\j\8\a\k\z\u\2\1\c\p\8\5\b\t\s\t\p\l\c\9\1\r\r\g\m\q\a\2\5\m\d\t\2\y\t\4\2\t\g\q\n\7\l\f\6\o\t\2\y\i\v\9\l\b\u\5\2\q\n\c\k\o\c\b\8\w\y\n\w\y\n\s\5\p\w\s\h\k\2\h\1\0\k\v\4\h\n\0\7\u\q\h\o\i\c\j\q\x\k\4\2\s\f\u\m\7\1\b\j\l\2\i\3\u\2\6\u\x\i\9\2\i\r\1\r\9\j\m\h\6\w\6\q\o\l\8\j\g\m\a\y\s\l\z\a\6\7\z\0\j\m\m\k\p\i\g\9\a\o\j\g\w\3\x\b\l\w\9\c\z\o\0\p\f\v\w\v\q\8\u\a\0\5\1\1\z\m\l\m\i\8\9\0\b\v\q\w\v\7\l\l\j\6\j\v\i\x\h\w\y\2\k\s\d\7\j\y\i\6\0\d\j\8\i\s\d\d\r\7\3\j\7\i\u\1\h\3\d\t\s\6\z\l\1\v\c\y\q\9\s\2\2\0\c\q\9\i\e\r\a\j\0\i\3\z\j\d\b\1\p\7\b\j\6\5\c\e\j\2\6\r\z\u\g\2\m\9\r\i\s\a\g\b\e\v\w\2\s\f\d\e\s\8\k\0\b\z\z\q\i\y\w\u\k\v\m\y\k\m\t\q\3\l\c\h\k\u\u\s\s\r\3\w\f\x\4\a\o\p\o\9\1\r\z\o\m\d\z\w\k\4\p\v\t\2\h\s\6\k\m\r\2\f\v\t\k\a\a\o\3\b\e\w\p\8\v\u\e\t\5\b\5\h\4\e\u\q\8\8\e\c\s\o\0 ]] 00:06:22.622 16:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.622 16:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:22.882 [2024-07-12 16:10:06.395156] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:22.882 [2024-07-12 16:10:06.395256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63265 ] 00:06:22.882 [2024-07-12 16:10:06.530552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.882 [2024-07-12 16:10:06.577974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.882 [2024-07-12 16:10:06.603649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.141  Copying: 512/512 [B] (average 250 kBps) 00:06:23.141 00:06:23.141 16:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lru1adtsavrcx0dgoovkjjrk9jn7hbp9hwu785dkyx12p5p78oo3uuie1u9tyyj35ky2gvcjwi3cj22xzxcufm677n9rcarahvzx5dbvemjuhflwaeg8aoj8akzu21cp85btstplc91rrgmqa25mdt2yt42tgqn7lf6ot2yiv9lbu52qnckocb8wynwyns5pwshk2h10kv4hn07uqhoicjqxk42sfum71bjl2i3u26uxi92ir1r9jmh6w6qol8jgmayslza67z0jmmkpig9aojgw3xblw9czo0pfvwvq8ua0511zmlmi890bvqwv7llj6jvixhwy2ksd7jyi60dj8isddr73j7iu1h3dts6zl1vcyq9s220cq9ieraj0i3zjdb1p7bj65cej26rzug2m9risagbevw2sfdes8k0bzzqiywukvmykmtq3lchkuussr3wfx4aopo91rzomdzwk4pvt2hs6kmr2fvtkaao3bewp8vuet5b5h4euq88ecso0 == \l\r\u\1\a\d\t\s\a\v\r\c\x\0\d\g\o\o\v\k\j\j\r\k\9\j\n\7\h\b\p\9\h\w\u\7\8\5\d\k\y\x\1\2\p\5\p\7\8\o\o\3\u\u\i\e\1\u\9\t\y\y\j\3\5\k\y\2\g\v\c\j\w\i\3\c\j\2\2\x\z\x\c\u\f\m\6\7\7\n\9\r\c\a\r\a\h\v\z\x\5\d\b\v\e\m\j\u\h\f\l\w\a\e\g\8\a\o\j\8\a\k\z\u\2\1\c\p\8\5\b\t\s\t\p\l\c\9\1\r\r\g\m\q\a\2\5\m\d\t\2\y\t\4\2\t\g\q\n\7\l\f\6\o\t\2\y\i\v\9\l\b\u\5\2\q\n\c\k\o\c\b\8\w\y\n\w\y\n\s\5\p\w\s\h\k\2\h\1\0\k\v\4\h\n\0\7\u\q\h\o\i\c\j\q\x\k\4\2\s\f\u\m\7\1\b\j\l\2\i\3\u\2\6\u\x\i\9\2\i\r\1\r\9\j\m\h\6\w\6\q\o\l\8\j\g\m\a\y\s\l\z\a\6\7\z\0\j\m\m\k\p\i\g\9\a\o\j\g\w\3\x\b\l\w\9\c\z\o\0\p\f\v\w\v\q\8\u\a\0\5\1\1\z\m\l\m\i\8\9\0\b\v\q\w\v\7\l\l\j\6\j\v\i\x\h\w\y\2\k\s\d\7\j\y\i\6\0\d\j\8\i\s\d\d\r\7\3\j\7\i\u\1\h\3\d\t\s\6\z\l\1\v\c\y\q\9\s\2\2\0\c\q\9\i\e\r\a\j\0\i\3\z\j\d\b\1\p\7\b\j\6\5\c\e\j\2\6\r\z\u\g\2\m\9\r\i\s\a\g\b\e\v\w\2\s\f\d\e\s\8\k\0\b\z\z\q\i\y\w\u\k\v\m\y\k\m\t\q\3\l\c\h\k\u\u\s\s\r\3\w\f\x\4\a\o\p\o\9\1\r\z\o\m\d\z\w\k\4\p\v\t\2\h\s\6\k\m\r\2\f\v\t\k\a\a\o\3\b\e\w\p\8\v\u\e\t\5\b\5\h\4\e\u\q\8\8\e\c\s\o\0 ]] 00:06:23.141 16:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.141 16:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:23.141 [2024-07-12 16:10:06.849945] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
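The dd_flags_misc copies interleaved above iterate a small flag matrix: read-side flags direct and nonblock against write-side flags direct, nonblock, sync and dsync; for each pairing, 512 random bytes are written to dump0, copied, and the two dumps are compared (the long escaped strings are those comparisons). A condensed sketch of that loop, approximating gen_bytes with head -c and using placeholder paths:

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    head -c 512 /dev/urandom > "$SRC"             # stand-in for the suite's gen_bytes 512
    for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --aio --if="$SRC" --iflag="$flag_ro" --of="$DST" --oflag="$flag_rw"
        cmp -s "$SRC" "$DST"                      # the suite compares encoded dumps; cmp checks the same thing
    done
done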
00:06:23.141 [2024-07-12 16:10:06.850040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63267 ] 00:06:23.400 [2024-07-12 16:10:06.982728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.400 [2024-07-12 16:10:07.029610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.400 [2024-07-12 16:10:07.056289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.660  Copying: 512/512 [B] (average 166 kBps) 00:06:23.660 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lru1adtsavrcx0dgoovkjjrk9jn7hbp9hwu785dkyx12p5p78oo3uuie1u9tyyj35ky2gvcjwi3cj22xzxcufm677n9rcarahvzx5dbvemjuhflwaeg8aoj8akzu21cp85btstplc91rrgmqa25mdt2yt42tgqn7lf6ot2yiv9lbu52qnckocb8wynwyns5pwshk2h10kv4hn07uqhoicjqxk42sfum71bjl2i3u26uxi92ir1r9jmh6w6qol8jgmayslza67z0jmmkpig9aojgw3xblw9czo0pfvwvq8ua0511zmlmi890bvqwv7llj6jvixhwy2ksd7jyi60dj8isddr73j7iu1h3dts6zl1vcyq9s220cq9ieraj0i3zjdb1p7bj65cej26rzug2m9risagbevw2sfdes8k0bzzqiywukvmykmtq3lchkuussr3wfx4aopo91rzomdzwk4pvt2hs6kmr2fvtkaao3bewp8vuet5b5h4euq88ecso0 == \l\r\u\1\a\d\t\s\a\v\r\c\x\0\d\g\o\o\v\k\j\j\r\k\9\j\n\7\h\b\p\9\h\w\u\7\8\5\d\k\y\x\1\2\p\5\p\7\8\o\o\3\u\u\i\e\1\u\9\t\y\y\j\3\5\k\y\2\g\v\c\j\w\i\3\c\j\2\2\x\z\x\c\u\f\m\6\7\7\n\9\r\c\a\r\a\h\v\z\x\5\d\b\v\e\m\j\u\h\f\l\w\a\e\g\8\a\o\j\8\a\k\z\u\2\1\c\p\8\5\b\t\s\t\p\l\c\9\1\r\r\g\m\q\a\2\5\m\d\t\2\y\t\4\2\t\g\q\n\7\l\f\6\o\t\2\y\i\v\9\l\b\u\5\2\q\n\c\k\o\c\b\8\w\y\n\w\y\n\s\5\p\w\s\h\k\2\h\1\0\k\v\4\h\n\0\7\u\q\h\o\i\c\j\q\x\k\4\2\s\f\u\m\7\1\b\j\l\2\i\3\u\2\6\u\x\i\9\2\i\r\1\r\9\j\m\h\6\w\6\q\o\l\8\j\g\m\a\y\s\l\z\a\6\7\z\0\j\m\m\k\p\i\g\9\a\o\j\g\w\3\x\b\l\w\9\c\z\o\0\p\f\v\w\v\q\8\u\a\0\5\1\1\z\m\l\m\i\8\9\0\b\v\q\w\v\7\l\l\j\6\j\v\i\x\h\w\y\2\k\s\d\7\j\y\i\6\0\d\j\8\i\s\d\d\r\7\3\j\7\i\u\1\h\3\d\t\s\6\z\l\1\v\c\y\q\9\s\2\2\0\c\q\9\i\e\r\a\j\0\i\3\z\j\d\b\1\p\7\b\j\6\5\c\e\j\2\6\r\z\u\g\2\m\9\r\i\s\a\g\b\e\v\w\2\s\f\d\e\s\8\k\0\b\z\z\q\i\y\w\u\k\v\m\y\k\m\t\q\3\l\c\h\k\u\u\s\s\r\3\w\f\x\4\a\o\p\o\9\1\r\z\o\m\d\z\w\k\4\p\v\t\2\h\s\6\k\m\r\2\f\v\t\k\a\a\o\3\b\e\w\p\8\v\u\e\t\5\b\5\h\4\e\u\q\8\8\e\c\s\o\0 ]] 00:06:23.660 00:06:23.660 real 0m3.560s 00:06:23.660 user 0m1.921s 00:06:23.660 sys 0m0.669s 00:06:23.660 ************************************ 00:06:23.660 END TEST dd_flags_misc_forced_aio 00:06:23.660 ************************************ 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:23.660 00:06:23.660 real 0m16.714s 00:06:23.660 user 0m7.841s 00:06:23.660 sys 0m4.154s 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.660 
************************************ 00:06:23.660 16:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:23.660 END TEST spdk_dd_posix 00:06:23.660 ************************************ 00:06:23.660 16:10:07 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:23.660 16:10:07 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:23.660 16:10:07 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.660 16:10:07 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.660 16:10:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:23.660 ************************************ 00:06:23.660 START TEST spdk_dd_malloc 00:06:23.660 ************************************ 00:06:23.660 16:10:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:23.920 * Looking for test storage... 00:06:23.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:23.920 ************************************ 00:06:23.920 START TEST dd_malloc_copy 00:06:23.920 ************************************ 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:06:23.920 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:23.921 16:10:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:23.921 [2024-07-12 16:10:07.491906] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:23.921 [2024-07-12 16:10:07.492017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63341 ] 00:06:23.921 { 00:06:23.921 "subsystems": [ 00:06:23.921 { 00:06:23.921 "subsystem": "bdev", 00:06:23.921 "config": [ 00:06:23.921 { 00:06:23.921 "params": { 00:06:23.921 "block_size": 512, 00:06:23.921 "num_blocks": 1048576, 00:06:23.921 "name": "malloc0" 00:06:23.921 }, 00:06:23.921 "method": "bdev_malloc_create" 00:06:23.921 }, 00:06:23.921 { 00:06:23.921 "params": { 00:06:23.921 "block_size": 512, 00:06:23.921 "num_blocks": 1048576, 00:06:23.921 "name": "malloc1" 00:06:23.921 }, 00:06:23.921 "method": "bdev_malloc_create" 00:06:23.921 }, 00:06:23.921 { 00:06:23.921 "method": "bdev_wait_for_examine" 00:06:23.921 } 00:06:23.921 ] 00:06:23.921 } 00:06:23.921 ] 00:06:23.921 } 00:06:23.921 [2024-07-12 16:10:07.627901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.180 [2024-07-12 16:10:07.679496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.180 [2024-07-12 16:10:07.707245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.984  Copying: 232/512 [MB] (232 MBps) Copying: 463/512 [MB] (230 MBps) Copying: 512/512 [MB] (average 230 MBps) 00:06:26.984 00:06:26.984 16:10:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:26.984 16:10:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:26.984 16:10:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:26.984 16:10:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.984 { 00:06:26.984 "subsystems": [ 00:06:26.984 { 00:06:26.984 "subsystem": "bdev", 00:06:26.984 "config": [ 00:06:26.984 { 00:06:26.984 "params": { 00:06:26.984 "block_size": 512, 00:06:26.984 "num_blocks": 1048576, 00:06:26.984 "name": "malloc0" 00:06:26.984 }, 00:06:26.984 "method": "bdev_malloc_create" 00:06:26.984 }, 00:06:26.984 { 00:06:26.984 "params": { 00:06:26.984 "block_size": 512, 00:06:26.984 "num_blocks": 1048576, 00:06:26.984 "name": "malloc1" 00:06:26.984 }, 00:06:26.984 "method": "bdev_malloc_create" 00:06:26.984 }, 00:06:26.984 { 00:06:26.984 "method": "bdev_wait_for_examine" 00:06:26.984 } 00:06:26.984 ] 00:06:26.984 } 00:06:26.984 ] 00:06:26.984 } 00:06:26.984 [2024-07-12 16:10:10.486792] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
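The malloc_copy runs above drive spdk_dd between two RAM-backed bdevs: the JSON printed with each run creates malloc0 and malloc1 (1048576 blocks of 512 bytes, i.e. 512 MiB each), waits for examine, and the copy is issued with --ib/--ob. A minimal sketch of the first direction, writing the same configuration to a temporary file instead of the suite's /dev/fd/62 substitution (the /tmp path is just an example):

cat > /tmp/malloc.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json /tmp/malloc.json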
00:06:26.984 [2024-07-12 16:10:10.486916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63383 ] 00:06:26.984 [2024-07-12 16:10:10.626101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.984 [2024-07-12 16:10:10.676349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.984 [2024-07-12 16:10:10.703463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.837  Copying: 199/512 [MB] (199 MBps) Copying: 439/512 [MB] (239 MBps) Copying: 512/512 [MB] (average 222 MBps) 00:06:29.837 00:06:29.837 00:06:29.837 real 0m6.080s 00:06:29.837 user 0m5.475s 00:06:29.837 sys 0m0.450s 00:06:29.837 ************************************ 00:06:29.837 END TEST dd_malloc_copy 00:06:29.837 ************************************ 00:06:29.837 16:10:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.837 16:10:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.837 16:10:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:06:29.837 00:06:29.837 real 0m6.219s 00:06:29.837 user 0m5.531s 00:06:29.837 sys 0m0.532s 00:06:29.837 16:10:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.837 16:10:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:30.097 ************************************ 00:06:30.097 END TEST spdk_dd_malloc 00:06:30.097 ************************************ 00:06:30.097 16:10:13 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:30.097 16:10:13 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:30.097 16:10:13 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.097 16:10:13 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.097 16:10:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:30.097 ************************************ 00:06:30.097 START TEST spdk_dd_bdev_to_bdev 00:06:30.097 ************************************ 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:30.097 * Looking for test storage... 
00:06:30.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:30.097 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:30.097 
16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:30.098 ************************************ 00:06:30.098 START TEST dd_inflate_file 00:06:30.098 ************************************ 00:06:30.098 16:10:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:30.098 [2024-07-12 16:10:13.767362] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:30.098 [2024-07-12 16:10:13.767468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63482 ] 00:06:30.357 [2024-07-12 16:10:13.903892] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.357 [2024-07-12 16:10:13.954617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.357 [2024-07-12 16:10:13.980301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.615  Copying: 64/64 [MB] (average 1777 MBps) 00:06:30.616 00:06:30.616 00:06:30.616 real 0m0.455s 00:06:30.616 user 0m0.254s 00:06:30.616 sys 0m0.214s 00:06:30.616 ************************************ 00:06:30.616 END TEST dd_inflate_file 00:06:30.616 ************************************ 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:30.616 ************************************ 00:06:30.616 START TEST dd_copy_to_out_bdev 00:06:30.616 ************************************ 00:06:30.616 16:10:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:30.616 { 00:06:30.616 "subsystems": [ 00:06:30.616 { 00:06:30.616 "subsystem": "bdev", 00:06:30.616 "config": [ 00:06:30.616 { 00:06:30.616 "params": { 00:06:30.616 "trtype": "pcie", 00:06:30.616 "traddr": "0000:00:10.0", 00:06:30.616 "name": "Nvme0" 00:06:30.616 }, 00:06:30.616 "method": "bdev_nvme_attach_controller" 00:06:30.616 }, 00:06:30.616 { 00:06:30.616 "params": { 00:06:30.616 "trtype": "pcie", 00:06:30.616 "traddr": "0000:00:11.0", 00:06:30.616 "name": "Nvme1" 00:06:30.616 }, 00:06:30.616 "method": "bdev_nvme_attach_controller" 00:06:30.616 }, 00:06:30.616 { 00:06:30.616 "method": "bdev_wait_for_examine" 00:06:30.616 } 00:06:30.616 ] 00:06:30.616 } 00:06:30.616 ] 00:06:30.616 } 00:06:30.616 [2024-07-12 16:10:14.275747] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:30.616 [2024-07-12 16:10:14.275838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ] 00:06:30.874 [2024-07-12 16:10:14.411712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.874 [2024-07-12 16:10:14.459759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.874 [2024-07-12 16:10:14.489621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.511  Copying: 53/64 [MB] (53 MBps) Copying: 64/64 [MB] (average 53 MBps) 00:06:32.511 00:06:32.511 00:06:32.511 real 0m1.826s 00:06:32.511 user 0m1.636s 00:06:32.511 sys 0m1.454s 00:06:32.511 ************************************ 00:06:32.511 END TEST dd_copy_to_out_bdev 00:06:32.511 ************************************ 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:32.511 ************************************ 00:06:32.511 START TEST dd_offset_magic 00:06:32.511 ************************************ 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:32.511 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:32.511 [2024-07-12 16:10:16.157516] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:32.511 [2024-07-12 16:10:16.157606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63555 ] 00:06:32.511 { 00:06:32.511 "subsystems": [ 00:06:32.511 { 00:06:32.511 "subsystem": "bdev", 00:06:32.511 "config": [ 00:06:32.511 { 00:06:32.511 "params": { 00:06:32.511 "trtype": "pcie", 00:06:32.511 "traddr": "0000:00:10.0", 00:06:32.511 "name": "Nvme0" 00:06:32.511 }, 00:06:32.511 "method": "bdev_nvme_attach_controller" 00:06:32.511 }, 00:06:32.511 { 00:06:32.511 "params": { 00:06:32.511 "trtype": "pcie", 00:06:32.511 "traddr": "0000:00:11.0", 00:06:32.511 "name": "Nvme1" 00:06:32.511 }, 00:06:32.511 "method": "bdev_nvme_attach_controller" 00:06:32.511 }, 00:06:32.511 { 00:06:32.511 "method": "bdev_wait_for_examine" 00:06:32.511 } 00:06:32.511 ] 00:06:32.511 } 00:06:32.511 ] 00:06:32.511 } 00:06:32.771 [2024-07-12 16:10:16.294530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.771 [2024-07-12 16:10:16.345693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.771 [2024-07-12 16:10:16.374796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.289  Copying: 65/65 [MB] (average 955 MBps) 00:06:33.289 00:06:33.289 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:33.289 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:33.289 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:33.289 16:10:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:33.289 [2024-07-12 16:10:16.852205] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:33.289 [2024-07-12 16:10:16.852295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63575 ] 00:06:33.289 { 00:06:33.289 "subsystems": [ 00:06:33.289 { 00:06:33.289 "subsystem": "bdev", 00:06:33.289 "config": [ 00:06:33.289 { 00:06:33.289 "params": { 00:06:33.289 "trtype": "pcie", 00:06:33.289 "traddr": "0000:00:10.0", 00:06:33.289 "name": "Nvme0" 00:06:33.289 }, 00:06:33.289 "method": "bdev_nvme_attach_controller" 00:06:33.289 }, 00:06:33.289 { 00:06:33.289 "params": { 00:06:33.289 "trtype": "pcie", 00:06:33.289 "traddr": "0000:00:11.0", 00:06:33.289 "name": "Nvme1" 00:06:33.289 }, 00:06:33.289 "method": "bdev_nvme_attach_controller" 00:06:33.289 }, 00:06:33.289 { 00:06:33.289 "method": "bdev_wait_for_examine" 00:06:33.289 } 00:06:33.289 ] 00:06:33.289 } 00:06:33.289 ] 00:06:33.289 } 00:06:33.289 [2024-07-12 16:10:16.986950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.548 [2024-07-12 16:10:17.036856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.548 [2024-07-12 16:10:17.063394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.808  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:33.808 00:06:33.808 16:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:33.808 16:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:33.808 16:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:33.808 16:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:33.808 16:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:33.808 16:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:33.808 16:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:33.808 [2024-07-12 16:10:17.411148] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:33.808 [2024-07-12 16:10:17.411244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63592 ] 00:06:33.808 { 00:06:33.808 "subsystems": [ 00:06:33.808 { 00:06:33.808 "subsystem": "bdev", 00:06:33.808 "config": [ 00:06:33.808 { 00:06:33.808 "params": { 00:06:33.808 "trtype": "pcie", 00:06:33.808 "traddr": "0000:00:10.0", 00:06:33.808 "name": "Nvme0" 00:06:33.808 }, 00:06:33.808 "method": "bdev_nvme_attach_controller" 00:06:33.808 }, 00:06:33.808 { 00:06:33.808 "params": { 00:06:33.808 "trtype": "pcie", 00:06:33.808 "traddr": "0000:00:11.0", 00:06:33.808 "name": "Nvme1" 00:06:33.808 }, 00:06:33.808 "method": "bdev_nvme_attach_controller" 00:06:33.808 }, 00:06:33.808 { 00:06:33.808 "method": "bdev_wait_for_examine" 00:06:33.808 } 00:06:33.808 ] 00:06:33.808 } 00:06:33.808 ] 00:06:33.808 } 00:06:34.067 [2024-07-12 16:10:17.546713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.067 [2024-07-12 16:10:17.600568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.067 [2024-07-12 16:10:17.627958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.586  Copying: 65/65 [MB] (average 1083 MBps) 00:06:34.586 00:06:34.586 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:34.586 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:34.586 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:34.586 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:34.586 [2024-07-12 16:10:18.111946] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:34.586 [2024-07-12 16:10:18.112039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63606 ] 00:06:34.586 { 00:06:34.586 "subsystems": [ 00:06:34.586 { 00:06:34.586 "subsystem": "bdev", 00:06:34.586 "config": [ 00:06:34.586 { 00:06:34.586 "params": { 00:06:34.586 "trtype": "pcie", 00:06:34.586 "traddr": "0000:00:10.0", 00:06:34.586 "name": "Nvme0" 00:06:34.586 }, 00:06:34.586 "method": "bdev_nvme_attach_controller" 00:06:34.586 }, 00:06:34.586 { 00:06:34.586 "params": { 00:06:34.586 "trtype": "pcie", 00:06:34.586 "traddr": "0000:00:11.0", 00:06:34.586 "name": "Nvme1" 00:06:34.586 }, 00:06:34.586 "method": "bdev_nvme_attach_controller" 00:06:34.586 }, 00:06:34.586 { 00:06:34.586 "method": "bdev_wait_for_examine" 00:06:34.586 } 00:06:34.586 ] 00:06:34.586 } 00:06:34.586 ] 00:06:34.586 } 00:06:34.586 [2024-07-12 16:10:18.248107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.586 [2024-07-12 16:10:18.301055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.845 [2024-07-12 16:10:18.328669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.104  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:35.104 00:06:35.104 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:35.104 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:35.104 00:06:35.104 real 0m2.530s 00:06:35.104 user 0m1.916s 00:06:35.104 sys 0m0.615s 00:06:35.104 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.104 ************************************ 00:06:35.104 END TEST dd_offset_magic 00:06:35.104 ************************************ 00:06:35.104 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:35.104 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:35.105 16:10:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:35.105 [2024-07-12 16:10:18.726938] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:35.105 [2024-07-12 16:10:18.727031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63643 ] 00:06:35.105 { 00:06:35.105 "subsystems": [ 00:06:35.105 { 00:06:35.105 "subsystem": "bdev", 00:06:35.105 "config": [ 00:06:35.105 { 00:06:35.105 "params": { 00:06:35.105 "trtype": "pcie", 00:06:35.105 "traddr": "0000:00:10.0", 00:06:35.105 "name": "Nvme0" 00:06:35.105 }, 00:06:35.105 "method": "bdev_nvme_attach_controller" 00:06:35.105 }, 00:06:35.105 { 00:06:35.105 "params": { 00:06:35.105 "trtype": "pcie", 00:06:35.105 "traddr": "0000:00:11.0", 00:06:35.105 "name": "Nvme1" 00:06:35.105 }, 00:06:35.105 "method": "bdev_nvme_attach_controller" 00:06:35.105 }, 00:06:35.105 { 00:06:35.105 "method": "bdev_wait_for_examine" 00:06:35.105 } 00:06:35.105 ] 00:06:35.105 } 00:06:35.105 ] 00:06:35.105 } 00:06:35.364 [2024-07-12 16:10:18.862979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.364 [2024-07-12 16:10:18.913219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.364 [2024-07-12 16:10:18.940369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.623  Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:35.623 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:35.623 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:35.623 [2024-07-12 16:10:19.299103] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:35.623 [2024-07-12 16:10:19.299197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63659 ] 00:06:35.623 { 00:06:35.623 "subsystems": [ 00:06:35.623 { 00:06:35.623 "subsystem": "bdev", 00:06:35.623 "config": [ 00:06:35.623 { 00:06:35.623 "params": { 00:06:35.623 "trtype": "pcie", 00:06:35.623 "traddr": "0000:00:10.0", 00:06:35.623 "name": "Nvme0" 00:06:35.623 }, 00:06:35.623 "method": "bdev_nvme_attach_controller" 00:06:35.623 }, 00:06:35.623 { 00:06:35.623 "params": { 00:06:35.623 "trtype": "pcie", 00:06:35.623 "traddr": "0000:00:11.0", 00:06:35.623 "name": "Nvme1" 00:06:35.623 }, 00:06:35.623 "method": "bdev_nvme_attach_controller" 00:06:35.623 }, 00:06:35.623 { 00:06:35.623 "method": "bdev_wait_for_examine" 00:06:35.623 } 00:06:35.623 ] 00:06:35.623 } 00:06:35.623 ] 00:06:35.623 } 00:06:35.882 [2024-07-12 16:10:19.435278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.883 [2024-07-12 16:10:19.485336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.883 [2024-07-12 16:10:19.513052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.142  Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:36.142 00:06:36.142 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:36.142 00:06:36.142 real 0m6.235s 00:06:36.142 user 0m4.751s 00:06:36.142 sys 0m2.782s 00:06:36.142 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.142 ************************************ 00:06:36.142 END TEST spdk_dd_bdev_to_bdev 00:06:36.142 ************************************ 00:06:36.142 16:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:36.401 16:10:19 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:36.401 16:10:19 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:36.401 16:10:19 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:36.401 16:10:19 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.401 16:10:19 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.401 16:10:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:36.401 ************************************ 00:06:36.401 START TEST spdk_dd_uring 00:06:36.401 ************************************ 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:36.401 * Looking for test storage... 
00:06:36.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:36.401 ************************************ 00:06:36.401 START TEST dd_uring_copy 00:06:36.401 ************************************ 00:06:36.401 
16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:06:36.401 16:10:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:36.401 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:36.401 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=y74x5unpg6bvnnnv489qx436j8c87vkp5j271tq2ihd3out63nc6isq5k8yis2iluaavmrfvhb6820s20tpwk23e97j9nvn8p8nxh53ukww1utth3rgepg6bzo6z74lqw5tq0s0v6q3xfywf15wwxuxvei3g18drq93v6p0ow2o0furjzgpzitg2sy1hvpncpr7hm8ys3c7hn3ayl3rl5sheixc8l1y9fjywckc2bav06stjw90btmz3jdvoly1zyigx1twim313j5q3clit813i8m0dh5qu48iun7h1gu53boov2wg8elr0urecdlfkyvkvmfgdtt6yn121cikc9ti5zg7gps6bhde3ww23v84j7gm2byuh4ix4r752qbn80qitjs7sutt2t11wv5k5b3i8q8e2gfozhqt303jsckagvbhmkflydptd58792opxwk2lkqxpgv7c1on38bw4ah0h63tuzw0vyi0zaz3alyvhcdfxxz7y9o2spo1gl3ffehip53fqjq1014qk29hvtjiliy482388lu42mbo05sk0e9vi08k5cc91d4ypsveebmdtz0qnorleeruaxt55rsulohyy4bz1i9m8wckyhhy6avdztscgppleyige8t6tn89ctcy9ejlo3te4eboyf51ng7c3plke4pvmjumqyfna323muxy1lmr50pdu6hgvvitha01ogz6w4csqjhf0vfjsceh9n7uvklicacl0ols8i9ny9uzue5bx9rn36wgcd522o76oilv05670vhm7e211cgb7syicdogniyg1ijdiwi0chn23vjgzbd6d6h4zb332uhi5vcqh5kpq2fei1hacyg05z0bwvnfh9e0yz0o51egd85l9p7outgw4plla4d5bcouge1s2b7ylyydqioqhsuu51w5jhnbgnmq5qk77d9cu2oi652nueh8i2uu54odzd9fbxcv2n1xw14h9t40599oi2hjxehsbrsldv94oecncbg3ai68yiga74mhn 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo y74x5unpg6bvnnnv489qx436j8c87vkp5j271tq2ihd3out63nc6isq5k8yis2iluaavmrfvhb6820s20tpwk23e97j9nvn8p8nxh53ukww1utth3rgepg6bzo6z74lqw5tq0s0v6q3xfywf15wwxuxvei3g18drq93v6p0ow2o0furjzgpzitg2sy1hvpncpr7hm8ys3c7hn3ayl3rl5sheixc8l1y9fjywckc2bav06stjw90btmz3jdvoly1zyigx1twim313j5q3clit813i8m0dh5qu48iun7h1gu53boov2wg8elr0urecdlfkyvkvmfgdtt6yn121cikc9ti5zg7gps6bhde3ww23v84j7gm2byuh4ix4r752qbn80qitjs7sutt2t11wv5k5b3i8q8e2gfozhqt303jsckagvbhmkflydptd58792opxwk2lkqxpgv7c1on38bw4ah0h63tuzw0vyi0zaz3alyvhcdfxxz7y9o2spo1gl3ffehip53fqjq1014qk29hvtjiliy482388lu42mbo05sk0e9vi08k5cc91d4ypsveebmdtz0qnorleeruaxt55rsulohyy4bz1i9m8wckyhhy6avdztscgppleyige8t6tn89ctcy9ejlo3te4eboyf51ng7c3plke4pvmjumqyfna323muxy1lmr50pdu6hgvvitha01ogz6w4csqjhf0vfjsceh9n7uvklicacl0ols8i9ny9uzue5bx9rn36wgcd522o76oilv05670vhm7e211cgb7syicdogniyg1ijdiwi0chn23vjgzbd6d6h4zb332uhi5vcqh5kpq2fei1hacyg05z0bwvnfh9e0yz0o51egd85l9p7outgw4plla4d5bcouge1s2b7ylyydqioqhsuu51w5jhnbgnmq5qk77d9cu2oi652nueh8i2uu54odzd9fbxcv2n1xw14h9t40599oi2hjxehsbrsldv94oecncbg3ai68yiga74mhn 00:06:36.402 16:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:36.402 [2024-07-12 16:10:20.076757] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:36.402 [2024-07-12 16:10:20.076884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63723 ] 00:06:36.661 [2024-07-12 16:10:20.216330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.661 [2024-07-12 16:10:20.270344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.661 [2024-07-12 16:10:20.296870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.487  Copying: 511/511 [MB] (average 1442 MBps) 00:06:37.487 00:06:37.487 16:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:37.487 16:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:37.487 16:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:37.487 16:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:37.487 { 00:06:37.487 "subsystems": [ 00:06:37.487 { 00:06:37.487 "subsystem": "bdev", 00:06:37.487 "config": [ 00:06:37.487 { 00:06:37.487 "params": { 00:06:37.487 "block_size": 512, 00:06:37.487 "num_blocks": 1048576, 00:06:37.487 "name": "malloc0" 00:06:37.487 }, 00:06:37.487 "method": "bdev_malloc_create" 00:06:37.487 }, 00:06:37.487 { 00:06:37.487 "params": { 00:06:37.487 "filename": "/dev/zram1", 00:06:37.487 "name": "uring0" 00:06:37.487 }, 00:06:37.487 "method": "bdev_uring_create" 00:06:37.487 }, 00:06:37.487 { 00:06:37.487 "method": "bdev_wait_for_examine" 00:06:37.487 } 00:06:37.487 ] 00:06:37.487 } 00:06:37.487 ] 00:06:37.487 } 00:06:37.487 [2024-07-12 16:10:21.060239] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:37.487 [2024-07-12 16:10:21.060337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63739 ] 00:06:37.487 [2024-07-12 16:10:21.202612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.746 [2024-07-12 16:10:21.260679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.746 [2024-07-12 16:10:21.287692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.252  Copying: 246/512 [MB] (246 MBps) Copying: 487/512 [MB] (241 MBps) Copying: 512/512 [MB] (average 242 MBps) 00:06:40.252 00:06:40.252 16:10:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:40.252 16:10:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:40.252 16:10:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:40.252 16:10:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.252 [2024-07-12 16:10:23.802163] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:40.252 [2024-07-12 16:10:23.802258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63778 ] 00:06:40.252 { 00:06:40.252 "subsystems": [ 00:06:40.252 { 00:06:40.252 "subsystem": "bdev", 00:06:40.252 "config": [ 00:06:40.252 { 00:06:40.252 "params": { 00:06:40.252 "block_size": 512, 00:06:40.252 "num_blocks": 1048576, 00:06:40.252 "name": "malloc0" 00:06:40.252 }, 00:06:40.252 "method": "bdev_malloc_create" 00:06:40.252 }, 00:06:40.252 { 00:06:40.252 "params": { 00:06:40.252 "filename": "/dev/zram1", 00:06:40.252 "name": "uring0" 00:06:40.252 }, 00:06:40.252 "method": "bdev_uring_create" 00:06:40.252 }, 00:06:40.252 { 00:06:40.252 "method": "bdev_wait_for_examine" 00:06:40.252 } 00:06:40.252 ] 00:06:40.252 } 00:06:40.252 ] 00:06:40.252 } 00:06:40.252 [2024-07-12 16:10:23.939936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.510 [2024-07-12 16:10:23.996965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.510 [2024-07-12 16:10:24.025520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.417  Copying: 187/512 [MB] (187 MBps) Copying: 374/512 [MB] (186 MBps) Copying: 512/512 [MB] (average 186 MBps) 00:06:43.417 00:06:43.417 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:43.417 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ y74x5unpg6bvnnnv489qx436j8c87vkp5j271tq2ihd3out63nc6isq5k8yis2iluaavmrfvhb6820s20tpwk23e97j9nvn8p8nxh53ukww1utth3rgepg6bzo6z74lqw5tq0s0v6q3xfywf15wwxuxvei3g18drq93v6p0ow2o0furjzgpzitg2sy1hvpncpr7hm8ys3c7hn3ayl3rl5sheixc8l1y9fjywckc2bav06stjw90btmz3jdvoly1zyigx1twim313j5q3clit813i8m0dh5qu48iun7h1gu53boov2wg8elr0urecdlfkyvkvmfgdtt6yn121cikc9ti5zg7gps6bhde3ww23v84j7gm2byuh4ix4r752qbn80qitjs7sutt2t11wv5k5b3i8q8e2gfozhqt303jsckagvbhmkflydptd58792opxwk2lkqxpgv7c1on38bw4ah0h63tuzw0vyi0zaz3alyvhcdfxxz7y9o2spo1gl3ffehip53fqjq1014qk29hvtjiliy482388lu42mbo05sk0e9vi08k5cc91d4ypsveebmdtz0qnorleeruaxt55rsulohyy4bz1i9m8wckyhhy6avdztscgppleyige8t6tn89ctcy9ejlo3te4eboyf51ng7c3plke4pvmjumqyfna323muxy1lmr50pdu6hgvvitha01ogz6w4csqjhf0vfjsceh9n7uvklicacl0ols8i9ny9uzue5bx9rn36wgcd522o76oilv05670vhm7e211cgb7syicdogniyg1ijdiwi0chn23vjgzbd6d6h4zb332uhi5vcqh5kpq2fei1hacyg05z0bwvnfh9e0yz0o51egd85l9p7outgw4plla4d5bcouge1s2b7ylyydqioqhsuu51w5jhnbgnmq5qk77d9cu2oi652nueh8i2uu54odzd9fbxcv2n1xw14h9t40599oi2hjxehsbrsldv94oecncbg3ai68yiga74mhn == 
\y\7\4\x\5\u\n\p\g\6\b\v\n\n\n\v\4\8\9\q\x\4\3\6\j\8\c\8\7\v\k\p\5\j\2\7\1\t\q\2\i\h\d\3\o\u\t\6\3\n\c\6\i\s\q\5\k\8\y\i\s\2\i\l\u\a\a\v\m\r\f\v\h\b\6\8\2\0\s\2\0\t\p\w\k\2\3\e\9\7\j\9\n\v\n\8\p\8\n\x\h\5\3\u\k\w\w\1\u\t\t\h\3\r\g\e\p\g\6\b\z\o\6\z\7\4\l\q\w\5\t\q\0\s\0\v\6\q\3\x\f\y\w\f\1\5\w\w\x\u\x\v\e\i\3\g\1\8\d\r\q\9\3\v\6\p\0\o\w\2\o\0\f\u\r\j\z\g\p\z\i\t\g\2\s\y\1\h\v\p\n\c\p\r\7\h\m\8\y\s\3\c\7\h\n\3\a\y\l\3\r\l\5\s\h\e\i\x\c\8\l\1\y\9\f\j\y\w\c\k\c\2\b\a\v\0\6\s\t\j\w\9\0\b\t\m\z\3\j\d\v\o\l\y\1\z\y\i\g\x\1\t\w\i\m\3\1\3\j\5\q\3\c\l\i\t\8\1\3\i\8\m\0\d\h\5\q\u\4\8\i\u\n\7\h\1\g\u\5\3\b\o\o\v\2\w\g\8\e\l\r\0\u\r\e\c\d\l\f\k\y\v\k\v\m\f\g\d\t\t\6\y\n\1\2\1\c\i\k\c\9\t\i\5\z\g\7\g\p\s\6\b\h\d\e\3\w\w\2\3\v\8\4\j\7\g\m\2\b\y\u\h\4\i\x\4\r\7\5\2\q\b\n\8\0\q\i\t\j\s\7\s\u\t\t\2\t\1\1\w\v\5\k\5\b\3\i\8\q\8\e\2\g\f\o\z\h\q\t\3\0\3\j\s\c\k\a\g\v\b\h\m\k\f\l\y\d\p\t\d\5\8\7\9\2\o\p\x\w\k\2\l\k\q\x\p\g\v\7\c\1\o\n\3\8\b\w\4\a\h\0\h\6\3\t\u\z\w\0\v\y\i\0\z\a\z\3\a\l\y\v\h\c\d\f\x\x\z\7\y\9\o\2\s\p\o\1\g\l\3\f\f\e\h\i\p\5\3\f\q\j\q\1\0\1\4\q\k\2\9\h\v\t\j\i\l\i\y\4\8\2\3\8\8\l\u\4\2\m\b\o\0\5\s\k\0\e\9\v\i\0\8\k\5\c\c\9\1\d\4\y\p\s\v\e\e\b\m\d\t\z\0\q\n\o\r\l\e\e\r\u\a\x\t\5\5\r\s\u\l\o\h\y\y\4\b\z\1\i\9\m\8\w\c\k\y\h\h\y\6\a\v\d\z\t\s\c\g\p\p\l\e\y\i\g\e\8\t\6\t\n\8\9\c\t\c\y\9\e\j\l\o\3\t\e\4\e\b\o\y\f\5\1\n\g\7\c\3\p\l\k\e\4\p\v\m\j\u\m\q\y\f\n\a\3\2\3\m\u\x\y\1\l\m\r\5\0\p\d\u\6\h\g\v\v\i\t\h\a\0\1\o\g\z\6\w\4\c\s\q\j\h\f\0\v\f\j\s\c\e\h\9\n\7\u\v\k\l\i\c\a\c\l\0\o\l\s\8\i\9\n\y\9\u\z\u\e\5\b\x\9\r\n\3\6\w\g\c\d\5\2\2\o\7\6\o\i\l\v\0\5\6\7\0\v\h\m\7\e\2\1\1\c\g\b\7\s\y\i\c\d\o\g\n\i\y\g\1\i\j\d\i\w\i\0\c\h\n\2\3\v\j\g\z\b\d\6\d\6\h\4\z\b\3\3\2\u\h\i\5\v\c\q\h\5\k\p\q\2\f\e\i\1\h\a\c\y\g\0\5\z\0\b\w\v\n\f\h\9\e\0\y\z\0\o\5\1\e\g\d\8\5\l\9\p\7\o\u\t\g\w\4\p\l\l\a\4\d\5\b\c\o\u\g\e\1\s\2\b\7\y\l\y\y\d\q\i\o\q\h\s\u\u\5\1\w\5\j\h\n\b\g\n\m\q\5\q\k\7\7\d\9\c\u\2\o\i\6\5\2\n\u\e\h\8\i\2\u\u\5\4\o\d\z\d\9\f\b\x\c\v\2\n\1\x\w\1\4\h\9\t\4\0\5\9\9\o\i\2\h\j\x\e\h\s\b\r\s\l\d\v\9\4\o\e\c\n\c\b\g\3\a\i\6\8\y\i\g\a\7\4\m\h\n ]] 00:06:43.417 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:43.417 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ y74x5unpg6bvnnnv489qx436j8c87vkp5j271tq2ihd3out63nc6isq5k8yis2iluaavmrfvhb6820s20tpwk23e97j9nvn8p8nxh53ukww1utth3rgepg6bzo6z74lqw5tq0s0v6q3xfywf15wwxuxvei3g18drq93v6p0ow2o0furjzgpzitg2sy1hvpncpr7hm8ys3c7hn3ayl3rl5sheixc8l1y9fjywckc2bav06stjw90btmz3jdvoly1zyigx1twim313j5q3clit813i8m0dh5qu48iun7h1gu53boov2wg8elr0urecdlfkyvkvmfgdtt6yn121cikc9ti5zg7gps6bhde3ww23v84j7gm2byuh4ix4r752qbn80qitjs7sutt2t11wv5k5b3i8q8e2gfozhqt303jsckagvbhmkflydptd58792opxwk2lkqxpgv7c1on38bw4ah0h63tuzw0vyi0zaz3alyvhcdfxxz7y9o2spo1gl3ffehip53fqjq1014qk29hvtjiliy482388lu42mbo05sk0e9vi08k5cc91d4ypsveebmdtz0qnorleeruaxt55rsulohyy4bz1i9m8wckyhhy6avdztscgppleyige8t6tn89ctcy9ejlo3te4eboyf51ng7c3plke4pvmjumqyfna323muxy1lmr50pdu6hgvvitha01ogz6w4csqjhf0vfjsceh9n7uvklicacl0ols8i9ny9uzue5bx9rn36wgcd522o76oilv05670vhm7e211cgb7syicdogniyg1ijdiwi0chn23vjgzbd6d6h4zb332uhi5vcqh5kpq2fei1hacyg05z0bwvnfh9e0yz0o51egd85l9p7outgw4plla4d5bcouge1s2b7ylyydqioqhsuu51w5jhnbgnmq5qk77d9cu2oi652nueh8i2uu54odzd9fbxcv2n1xw14h9t40599oi2hjxehsbrsldv94oecncbg3ai68yiga74mhn == 
\y\7\4\x\5\u\n\p\g\6\b\v\n\n\n\v\4\8\9\q\x\4\3\6\j\8\c\8\7\v\k\p\5\j\2\7\1\t\q\2\i\h\d\3\o\u\t\6\3\n\c\6\i\s\q\5\k\8\y\i\s\2\i\l\u\a\a\v\m\r\f\v\h\b\6\8\2\0\s\2\0\t\p\w\k\2\3\e\9\7\j\9\n\v\n\8\p\8\n\x\h\5\3\u\k\w\w\1\u\t\t\h\3\r\g\e\p\g\6\b\z\o\6\z\7\4\l\q\w\5\t\q\0\s\0\v\6\q\3\x\f\y\w\f\1\5\w\w\x\u\x\v\e\i\3\g\1\8\d\r\q\9\3\v\6\p\0\o\w\2\o\0\f\u\r\j\z\g\p\z\i\t\g\2\s\y\1\h\v\p\n\c\p\r\7\h\m\8\y\s\3\c\7\h\n\3\a\y\l\3\r\l\5\s\h\e\i\x\c\8\l\1\y\9\f\j\y\w\c\k\c\2\b\a\v\0\6\s\t\j\w\9\0\b\t\m\z\3\j\d\v\o\l\y\1\z\y\i\g\x\1\t\w\i\m\3\1\3\j\5\q\3\c\l\i\t\8\1\3\i\8\m\0\d\h\5\q\u\4\8\i\u\n\7\h\1\g\u\5\3\b\o\o\v\2\w\g\8\e\l\r\0\u\r\e\c\d\l\f\k\y\v\k\v\m\f\g\d\t\t\6\y\n\1\2\1\c\i\k\c\9\t\i\5\z\g\7\g\p\s\6\b\h\d\e\3\w\w\2\3\v\8\4\j\7\g\m\2\b\y\u\h\4\i\x\4\r\7\5\2\q\b\n\8\0\q\i\t\j\s\7\s\u\t\t\2\t\1\1\w\v\5\k\5\b\3\i\8\q\8\e\2\g\f\o\z\h\q\t\3\0\3\j\s\c\k\a\g\v\b\h\m\k\f\l\y\d\p\t\d\5\8\7\9\2\o\p\x\w\k\2\l\k\q\x\p\g\v\7\c\1\o\n\3\8\b\w\4\a\h\0\h\6\3\t\u\z\w\0\v\y\i\0\z\a\z\3\a\l\y\v\h\c\d\f\x\x\z\7\y\9\o\2\s\p\o\1\g\l\3\f\f\e\h\i\p\5\3\f\q\j\q\1\0\1\4\q\k\2\9\h\v\t\j\i\l\i\y\4\8\2\3\8\8\l\u\4\2\m\b\o\0\5\s\k\0\e\9\v\i\0\8\k\5\c\c\9\1\d\4\y\p\s\v\e\e\b\m\d\t\z\0\q\n\o\r\l\e\e\r\u\a\x\t\5\5\r\s\u\l\o\h\y\y\4\b\z\1\i\9\m\8\w\c\k\y\h\h\y\6\a\v\d\z\t\s\c\g\p\p\l\e\y\i\g\e\8\t\6\t\n\8\9\c\t\c\y\9\e\j\l\o\3\t\e\4\e\b\o\y\f\5\1\n\g\7\c\3\p\l\k\e\4\p\v\m\j\u\m\q\y\f\n\a\3\2\3\m\u\x\y\1\l\m\r\5\0\p\d\u\6\h\g\v\v\i\t\h\a\0\1\o\g\z\6\w\4\c\s\q\j\h\f\0\v\f\j\s\c\e\h\9\n\7\u\v\k\l\i\c\a\c\l\0\o\l\s\8\i\9\n\y\9\u\z\u\e\5\b\x\9\r\n\3\6\w\g\c\d\5\2\2\o\7\6\o\i\l\v\0\5\6\7\0\v\h\m\7\e\2\1\1\c\g\b\7\s\y\i\c\d\o\g\n\i\y\g\1\i\j\d\i\w\i\0\c\h\n\2\3\v\j\g\z\b\d\6\d\6\h\4\z\b\3\3\2\u\h\i\5\v\c\q\h\5\k\p\q\2\f\e\i\1\h\a\c\y\g\0\5\z\0\b\w\v\n\f\h\9\e\0\y\z\0\o\5\1\e\g\d\8\5\l\9\p\7\o\u\t\g\w\4\p\l\l\a\4\d\5\b\c\o\u\g\e\1\s\2\b\7\y\l\y\y\d\q\i\o\q\h\s\u\u\5\1\w\5\j\h\n\b\g\n\m\q\5\q\k\7\7\d\9\c\u\2\o\i\6\5\2\n\u\e\h\8\i\2\u\u\5\4\o\d\z\d\9\f\b\x\c\v\2\n\1\x\w\1\4\h\9\t\4\0\5\9\9\o\i\2\h\j\x\e\h\s\b\r\s\l\d\v\9\4\o\e\c\n\c\b\g\3\a\i\6\8\y\i\g\a\7\4\m\h\n ]] 00:06:43.417 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:43.984 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:43.984 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:43.984 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:43.984 16:10:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.984 [2024-07-12 16:10:27.553424] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:43.984 [2024-07-12 16:10:27.553506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63846 ] 00:06:43.984 { 00:06:43.984 "subsystems": [ 00:06:43.984 { 00:06:43.984 "subsystem": "bdev", 00:06:43.984 "config": [ 00:06:43.984 { 00:06:43.984 "params": { 00:06:43.984 "block_size": 512, 00:06:43.984 "num_blocks": 1048576, 00:06:43.984 "name": "malloc0" 00:06:43.984 }, 00:06:43.985 "method": "bdev_malloc_create" 00:06:43.985 }, 00:06:43.985 { 00:06:43.985 "params": { 00:06:43.985 "filename": "/dev/zram1", 00:06:43.985 "name": "uring0" 00:06:43.985 }, 00:06:43.985 "method": "bdev_uring_create" 00:06:43.985 }, 00:06:43.985 { 00:06:43.985 "method": "bdev_wait_for_examine" 00:06:43.985 } 00:06:43.985 ] 00:06:43.985 } 00:06:43.985 ] 00:06:43.985 } 00:06:43.985 [2024-07-12 16:10:27.683219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.244 [2024-07-12 16:10:27.732518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.244 [2024-07-12 16:10:27.760884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.384  Copying: 172/512 [MB] (172 MBps) Copying: 348/512 [MB] (176 MBps) Copying: 512/512 [MB] (average 174 MBps) 00:06:47.384 00:06:47.384 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:47.384 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:47.384 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:47.384 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:47.384 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:47.384 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:47.384 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:47.384 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.384 [2024-07-12 16:10:31.098477] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:47.384 [2024-07-12 16:10:31.098579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63891 ] 00:06:47.643 { 00:06:47.643 "subsystems": [ 00:06:47.643 { 00:06:47.643 "subsystem": "bdev", 00:06:47.643 "config": [ 00:06:47.643 { 00:06:47.643 "params": { 00:06:47.643 "block_size": 512, 00:06:47.643 "num_blocks": 1048576, 00:06:47.643 "name": "malloc0" 00:06:47.643 }, 00:06:47.643 "method": "bdev_malloc_create" 00:06:47.643 }, 00:06:47.643 { 00:06:47.643 "params": { 00:06:47.643 "filename": "/dev/zram1", 00:06:47.643 "name": "uring0" 00:06:47.643 }, 00:06:47.643 "method": "bdev_uring_create" 00:06:47.643 }, 00:06:47.643 { 00:06:47.643 "params": { 00:06:47.643 "name": "uring0" 00:06:47.643 }, 00:06:47.643 "method": "bdev_uring_delete" 00:06:47.643 }, 00:06:47.643 { 00:06:47.643 "method": "bdev_wait_for_examine" 00:06:47.643 } 00:06:47.643 ] 00:06:47.643 } 00:06:47.643 ] 00:06:47.643 } 00:06:47.643 [2024-07-12 16:10:31.234724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.643 [2024-07-12 16:10:31.288493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.643 [2024-07-12 16:10:31.318058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.161  Copying: 0/0 [B] (average 0 Bps) 00:06:48.161 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.161 16:10:31 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:48.161 [2024-07-12 16:10:31.730995] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:48.161 [2024-07-12 16:10:31.731081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63920 ] 00:06:48.161 { 00:06:48.161 "subsystems": [ 00:06:48.161 { 00:06:48.161 "subsystem": "bdev", 00:06:48.161 "config": [ 00:06:48.161 { 00:06:48.161 "params": { 00:06:48.161 "block_size": 512, 00:06:48.161 "num_blocks": 1048576, 00:06:48.161 "name": "malloc0" 00:06:48.161 }, 00:06:48.161 "method": "bdev_malloc_create" 00:06:48.161 }, 00:06:48.161 { 00:06:48.161 "params": { 00:06:48.161 "filename": "/dev/zram1", 00:06:48.161 "name": "uring0" 00:06:48.161 }, 00:06:48.161 "method": "bdev_uring_create" 00:06:48.161 }, 00:06:48.161 { 00:06:48.161 "params": { 00:06:48.161 "name": "uring0" 00:06:48.161 }, 00:06:48.161 "method": "bdev_uring_delete" 00:06:48.161 }, 00:06:48.161 { 00:06:48.161 "method": "bdev_wait_for_examine" 00:06:48.161 } 00:06:48.161 ] 00:06:48.161 } 00:06:48.161 ] 00:06:48.161 } 00:06:48.161 [2024-07-12 16:10:31.870759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.420 [2024-07-12 16:10:31.940429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.420 [2024-07-12 16:10:31.975435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.420 [2024-07-12 16:10:32.110145] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:48.420 [2024-07-12 16:10:32.110192] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:48.420 [2024-07-12 16:10:32.110203] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:48.420 [2024-07-12 16:10:32.110214] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.679 [2024-07-12 16:10:32.265366] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:06:48.679 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:48.938 ************************************ 00:06:48.938 END TEST dd_uring_copy 00:06:48.938 ************************************ 00:06:48.938 00:06:48.938 real 0m12.587s 00:06:48.938 user 0m8.532s 00:06:48.938 sys 0m10.661s 00:06:48.938 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.938 16:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.938 16:10:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:06:48.938 ************************************ 00:06:48.938 END TEST spdk_dd_uring 00:06:48.938 ************************************ 00:06:48.938 00:06:48.938 real 0m12.723s 00:06:48.938 user 0m8.579s 00:06:48.938 sys 0m10.747s 00:06:48.938 16:10:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.938 16:10:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:49.249 16:10:32 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:49.249 16:10:32 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:49.249 16:10:32 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.249 16:10:32 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.249 16:10:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:49.249 ************************************ 00:06:49.249 START TEST spdk_dd_sparse 00:06:49.249 ************************************ 00:06:49.249 16:10:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:49.249 * Looking for test storage... 00:06:49.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:49.249 16:10:32 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.249 16:10:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.249 16:10:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.249 16:10:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:49.250 1+0 records in 00:06:49.250 1+0 records out 00:06:49.250 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00579163 s, 724 MB/s 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:49.250 1+0 records in 00:06:49.250 1+0 records out 00:06:49.250 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00556916 s, 753 MB/s 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:49.250 1+0 records in 00:06:49.250 1+0 records out 00:06:49.250 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00526799 s, 796 MB/s 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:49.250 ************************************ 00:06:49.250 START TEST dd_sparse_file_to_file 00:06:49.250 ************************************ 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:49.250 16:10:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:49.250 [2024-07-12 16:10:32.865588] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:49.250 [2024-07-12 16:10:32.865854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64012 ] 00:06:49.250 { 00:06:49.250 "subsystems": [ 00:06:49.250 { 00:06:49.250 "subsystem": "bdev", 00:06:49.250 "config": [ 00:06:49.250 { 00:06:49.250 "params": { 00:06:49.250 "block_size": 4096, 00:06:49.250 "filename": "dd_sparse_aio_disk", 00:06:49.250 "name": "dd_aio" 00:06:49.250 }, 00:06:49.250 "method": "bdev_aio_create" 00:06:49.250 }, 00:06:49.250 { 00:06:49.250 "params": { 00:06:49.250 "lvs_name": "dd_lvstore", 00:06:49.250 "bdev_name": "dd_aio" 00:06:49.250 }, 00:06:49.250 "method": "bdev_lvol_create_lvstore" 00:06:49.250 }, 00:06:49.250 { 00:06:49.250 "method": "bdev_wait_for_examine" 00:06:49.250 } 00:06:49.250 ] 00:06:49.250 } 00:06:49.250 ] 00:06:49.250 } 00:06:49.509 [2024-07-12 16:10:33.003864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.509 [2024-07-12 16:10:33.051899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.509 [2024-07-12 16:10:33.078553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.767  Copying: 12/36 [MB] (average 1000 MBps) 00:06:49.767 00:06:49.767 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:49.767 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:49.767 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:49.767 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:49.767 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:49.767 16:10:33 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:49.767 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:49.767 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:49.767 ************************************ 00:06:49.767 END TEST dd_sparse_file_to_file 00:06:49.767 ************************************ 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:49.768 00:06:49.768 real 0m0.517s 00:06:49.768 user 0m0.322s 00:06:49.768 sys 0m0.219s 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:49.768 ************************************ 00:06:49.768 START TEST dd_sparse_file_to_bdev 00:06:49.768 ************************************ 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:49.768 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:49.768 [2024-07-12 16:10:33.426342] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:49.768 [2024-07-12 16:10:33.426437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64049 ] 00:06:49.768 { 00:06:49.768 "subsystems": [ 00:06:49.768 { 00:06:49.768 "subsystem": "bdev", 00:06:49.768 "config": [ 00:06:49.768 { 00:06:49.768 "params": { 00:06:49.768 "block_size": 4096, 00:06:49.768 "filename": "dd_sparse_aio_disk", 00:06:49.768 "name": "dd_aio" 00:06:49.768 }, 00:06:49.768 "method": "bdev_aio_create" 00:06:49.768 }, 00:06:49.768 { 00:06:49.768 "params": { 00:06:49.768 "lvs_name": "dd_lvstore", 00:06:49.768 "lvol_name": "dd_lvol", 00:06:49.768 "size_in_mib": 36, 00:06:49.768 "thin_provision": true 00:06:49.768 }, 00:06:49.768 "method": "bdev_lvol_create" 00:06:49.768 }, 00:06:49.768 { 00:06:49.768 "method": "bdev_wait_for_examine" 00:06:49.768 } 00:06:49.768 ] 00:06:49.768 } 00:06:49.768 ] 00:06:49.768 } 00:06:50.027 [2024-07-12 16:10:33.562335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.027 [2024-07-12 16:10:33.611154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.027 [2024-07-12 16:10:33.637400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.286  Copying: 12/36 [MB] (average 521 MBps) 00:06:50.286 00:06:50.286 00:06:50.286 real 0m0.495s 00:06:50.286 user 0m0.325s 00:06:50.286 sys 0m0.229s 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.286 ************************************ 00:06:50.286 END TEST dd_sparse_file_to_bdev 00:06:50.286 ************************************ 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:50.286 ************************************ 00:06:50.286 START TEST dd_sparse_bdev_to_file 00:06:50.286 ************************************ 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:50.286 16:10:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:50.286 [2024-07-12 16:10:33.973029] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:50.286 [2024-07-12 16:10:33.973142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64087 ] 00:06:50.286 { 00:06:50.286 "subsystems": [ 00:06:50.286 { 00:06:50.286 "subsystem": "bdev", 00:06:50.286 "config": [ 00:06:50.286 { 00:06:50.286 "params": { 00:06:50.286 "block_size": 4096, 00:06:50.286 "filename": "dd_sparse_aio_disk", 00:06:50.286 "name": "dd_aio" 00:06:50.286 }, 00:06:50.286 "method": "bdev_aio_create" 00:06:50.286 }, 00:06:50.286 { 00:06:50.286 "method": "bdev_wait_for_examine" 00:06:50.286 } 00:06:50.286 ] 00:06:50.286 } 00:06:50.286 ] 00:06:50.286 } 00:06:50.545 [2024-07-12 16:10:34.109743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.545 [2024-07-12 16:10:34.178113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.545 [2024-07-12 16:10:34.210273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.803  Copying: 12/36 [MB] (average 1090 MBps) 00:06:50.803 00:06:50.803 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:50.803 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:50.804 00:06:50.804 real 0m0.521s 00:06:50.804 user 0m0.334s 00:06:50.804 sys 0m0.232s 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.804 ************************************ 00:06:50.804 END TEST dd_sparse_bdev_to_file 00:06:50.804 ************************************ 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:50.804 00:06:50.804 real 0m1.833s 00:06:50.804 user 0m1.075s 00:06:50.804 sys 0m0.871s 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.804 16:10:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:50.804 ************************************ 00:06:50.804 END TEST spdk_dd_sparse 00:06:50.804 ************************************ 00:06:51.064 16:10:34 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:51.064 16:10:34 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:51.064 16:10:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.064 16:10:34 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.064 16:10:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:51.064 ************************************ 00:06:51.064 START TEST spdk_dd_negative 00:06:51.064 ************************************ 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:51.064 * Looking for test storage... 00:06:51.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.064 ************************************ 00:06:51.064 START TEST dd_invalid_arguments 00:06:51.064 ************************************ 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.064 16:10:34 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.064 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:51.064 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:51.064 00:06:51.064 CPU options: 00:06:51.064 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:51.064 (like [0,1,10]) 00:06:51.064 --lcores lcore to CPU mapping list. The list is in the format: 00:06:51.064 [<,lcores[@CPUs]>...] 00:06:51.064 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:51.064 Within the group, '-' is used for range separator, 00:06:51.064 ',' is used for single number separator. 00:06:51.064 '( )' can be omitted for single element group, 00:06:51.065 '@' can be omitted if cpus and lcores have the same value 00:06:51.065 --disable-cpumask-locks Disable CPU core lock files. 00:06:51.065 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:51.065 pollers in the app support interrupt mode) 00:06:51.065 -p, --main-core main (primary) core for DPDK 00:06:51.065 00:06:51.065 Configuration options: 00:06:51.065 -c, --config, --json JSON config file 00:06:51.065 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:51.065 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:51.065 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:51.065 --rpcs-allowed comma-separated list of permitted RPCS 00:06:51.065 --json-ignore-init-errors don't exit on invalid config entry 00:06:51.065 00:06:51.065 Memory options: 00:06:51.065 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:51.065 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:51.065 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:51.065 -R, --huge-unlink unlink huge files after initialization 00:06:51.065 -n, --mem-channels number of memory channels used for DPDK 00:06:51.065 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:51.065 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:51.065 --no-huge run without using hugepages 00:06:51.065 -i, --shm-id shared memory ID (optional) 00:06:51.065 -g, --single-file-segments force creating just one hugetlbfs file 00:06:51.065 00:06:51.065 PCI options: 00:06:51.065 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:51.065 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:51.065 -u, --no-pci disable PCI access 00:06:51.065 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:51.065 00:06:51.065 Log options: 00:06:51.065 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:51.065 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:51.065 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:51.065 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:51.065 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:06:51.065 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:06:51.065 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:06:51.065 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:06:51.065 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:06:51.065 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:06:51.065 virtio_vfio_user, vmd) 00:06:51.065 --silence-noticelog disable notice level logging to stderr 00:06:51.065 00:06:51.065 Trace options: 00:06:51.065 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:51.065 setting 0 to disable trace (default 32768) 00:06:51.065 Tracepoints vary in size and can use more than one trace entry. 00:06:51.065 -e, --tpoint-group [:] 00:06:51.065 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:51.065 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:51.065 [2024-07-12 16:10:34.704167] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:51.065 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:06:51.065 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:51.065 a tracepoint group. First tpoint inside a group can be enabled by 00:06:51.065 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:51.065 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:51.065 in /include/spdk_internal/trace_defs.h 00:06:51.065 00:06:51.065 Other options: 00:06:51.065 -h, --help show this usage 00:06:51.065 -v, --version print SPDK version 00:06:51.065 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:51.065 --env-context Opaque context for use of the env implementation 00:06:51.065 00:06:51.065 Application specific: 00:06:51.065 [--------- DD Options ---------] 00:06:51.065 --if Input file. Must specify either --if or --ib. 00:06:51.065 --ib Input bdev. Must specifier either --if or --ib 00:06:51.065 --of Output file. Must specify either --of or --ob. 00:06:51.065 --ob Output bdev. Must specify either --of or --ob. 00:06:51.065 --iflag Input file flags. 00:06:51.065 --oflag Output file flags. 00:06:51.065 --bs I/O unit size (default: 4096) 00:06:51.065 --qd Queue depth (default: 2) 00:06:51.065 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:51.065 --skip Skip this many I/O units at start of input. (default: 0) 00:06:51.065 --seek Skip this many I/O units at start of output. (default: 0) 00:06:51.065 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:51.065 --sparse Enable hole skipping in input target 00:06:51.065 Available iflag and oflag values: 00:06:51.065 append - append mode 00:06:51.065 direct - use direct I/O for data 00:06:51.065 directory - fail unless a directory 00:06:51.065 dsync - use synchronized I/O for data 00:06:51.065 noatime - do not update access time 00:06:51.065 noctty - do not assign controlling terminal from file 00:06:51.065 nofollow - do not follow symlinks 00:06:51.065 nonblock - use non-blocking I/O 00:06:51.065 sync - use synchronized I/O for data and metadata 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.065 00:06:51.065 real 0m0.071s 00:06:51.065 user 0m0.044s 00:06:51.065 sys 0m0.027s 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:51.065 ************************************ 00:06:51.065 END TEST dd_invalid_arguments 00:06:51.065 ************************************ 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.065 ************************************ 00:06:51.065 START TEST dd_double_input 00:06:51.065 ************************************ 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.065 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:51.335 [2024-07-12 16:10:34.827311] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.335 00:06:51.335 real 0m0.074s 00:06:51.335 user 0m0.044s 00:06:51.335 sys 0m0.029s 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.335 ************************************ 00:06:51.335 END TEST dd_double_input 00:06:51.335 ************************************ 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.335 ************************************ 00:06:51.335 START TEST dd_double_output 00:06:51.335 ************************************ 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:06:51.335 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:51.336 [2024-07-12 16:10:34.951201] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.336 00:06:51.336 real 0m0.076s 00:06:51.336 user 0m0.046s 00:06:51.336 sys 0m0.029s 00:06:51.336 ************************************ 00:06:51.336 END TEST dd_double_output 00:06:51.336 ************************************ 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.336 16:10:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.336 ************************************ 00:06:51.336 START TEST dd_no_input 00:06:51.336 ************************************ 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.336 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.336 16:10:35 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:51.595 [2024-07-12 16:10:35.078694] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.595 00:06:51.595 real 0m0.072s 00:06:51.595 user 0m0.048s 00:06:51.595 sys 0m0.024s 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 ************************************ 00:06:51.595 END TEST dd_no_input 00:06:51.595 ************************************ 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 ************************************ 00:06:51.595 START TEST dd_no_output 00:06:51.595 ************************************ 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.595 16:10:35 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.595 [2024-07-12 16:10:35.202952] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.595 00:06:51.595 real 0m0.075s 00:06:51.595 user 0m0.064s 00:06:51.595 sys 0m0.010s 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 ************************************ 00:06:51.595 END TEST dd_no_output 00:06:51.595 ************************************ 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 ************************************ 00:06:51.595 START TEST dd_wrong_blocksize 00:06:51.595 ************************************ 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.595 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:51.854 [2024-07-12 16:10:35.334226] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.854 00:06:51.854 real 0m0.074s 00:06:51.854 user 0m0.046s 00:06:51.854 sys 0m0.028s 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:51.854 ************************************ 00:06:51.854 END TEST dd_wrong_blocksize 00:06:51.854 ************************************ 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.854 ************************************ 00:06:51.854 START TEST dd_smaller_blocksize 00:06:51.854 ************************************ 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:06:51.854 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.855 16:10:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:51.855 [2024-07-12 16:10:35.464745] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:51.855 [2024-07-12 16:10:35.464856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64300 ] 00:06:52.113 [2024-07-12 16:10:35.605985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.113 [2024-07-12 16:10:35.680763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.113 [2024-07-12 16:10:35.715921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.378 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:52.378 [2024-07-12 16:10:36.018611] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:52.378 [2024-07-12 16:10:36.018708] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.378 [2024-07-12 16:10:36.089852] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.636 00:06:52.636 real 0m0.775s 00:06:52.636 user 0m0.347s 00:06:52.636 sys 0m0.322s 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.636 ************************************ 00:06:52.636 END TEST dd_smaller_blocksize 00:06:52.636 ************************************ 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.636 ************************************ 00:06:52.636 START TEST dd_invalid_count 00:06:52.636 ************************************ 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:52.636 [2024-07-12 16:10:36.284459] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.636 00:06:52.636 real 0m0.068s 00:06:52.636 user 0m0.044s 00:06:52.636 sys 0m0.023s 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:52.636 ************************************ 00:06:52.636 END TEST dd_invalid_count 
00:06:52.636 ************************************ 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.636 ************************************ 00:06:52.636 START TEST dd_invalid_oflag 00:06:52.636 ************************************ 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.636 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.637 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.637 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.637 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.637 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.637 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.637 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:52.896 [2024-07-12 16:10:36.410233] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.896 00:06:52.896 real 0m0.073s 00:06:52.896 user 0m0.045s 00:06:52.896 sys 0m0.027s 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:52.896 
************************************ 00:06:52.896 END TEST dd_invalid_oflag 00:06:52.896 ************************************ 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.896 ************************************ 00:06:52.896 START TEST dd_invalid_iflag 00:06:52.896 ************************************ 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:52.896 [2024-07-12 16:10:36.540018] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.896 00:06:52.896 real 0m0.077s 00:06:52.896 user 0m0.043s 00:06:52.896 sys 0m0.033s 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@10 -- # set +x 00:06:52.896 ************************************ 00:06:52.896 END TEST dd_invalid_iflag 00:06:52.896 ************************************ 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.896 ************************************ 00:06:52.896 START TEST dd_unknown_flag 00:06:52.896 ************************************ 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.896 16:10:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:53.156 [2024-07-12 16:10:36.654761] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:53.156 [2024-07-12 16:10:36.654838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64402 ] 00:06:53.156 [2024-07-12 16:10:36.786465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.156 [2024-07-12 16:10:36.849143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.156 [2024-07-12 16:10:36.881987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.415 [2024-07-12 16:10:36.901125] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:53.415 [2024-07-12 16:10:36.901208] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.415 [2024-07-12 16:10:36.901277] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:53.415 [2024-07-12 16:10:36.901290] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.415 [2024-07-12 16:10:36.901578] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:53.415 [2024-07-12 16:10:36.901632] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.415 [2024-07-12 16:10:36.901680] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:53.415 [2024-07-12 16:10:36.901690] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:53.415 [2024-07-12 16:10:36.970603] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.415 00:06:53.415 real 0m0.459s 00:06:53.415 user 0m0.248s 00:06:53.415 sys 0m0.116s 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:53.415 ************************************ 00:06:53.415 END TEST dd_unknown_flag 00:06:53.415 ************************************ 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:53.415 ************************************ 00:06:53.415 START TEST dd_invalid_json 00:06:53.415 ************************************ 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:06:53.415 16:10:37 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.415 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:53.674 [2024-07-12 16:10:37.168728] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:53.674 [2024-07-12 16:10:37.168836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64426 ] 00:06:53.674 [2024-07-12 16:10:37.307080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.674 [2024-07-12 16:10:37.363395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.674 [2024-07-12 16:10:37.363479] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:53.674 [2024-07-12 16:10:37.363496] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:53.674 [2024-07-12 16:10:37.363506] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.674 [2024-07-12 16:10:37.363541] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.933 00:06:53.933 real 0m0.340s 00:06:53.933 user 0m0.172s 00:06:53.933 sys 0m0.065s 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.933 ************************************ 00:06:53.933 END TEST dd_invalid_json 00:06:53.933 ************************************ 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:06:53.933 ************************************ 00:06:53.933 END TEST spdk_dd_negative 00:06:53.933 ************************************ 00:06:53.933 00:06:53.933 real 0m2.945s 00:06:53.933 user 0m1.428s 00:06:53.933 sys 0m1.155s 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.933 16:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:53.933 16:10:37 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:53.933 00:06:53.933 real 1m2.249s 00:06:53.933 user 0m40.353s 00:06:53.933 sys 0m25.105s 00:06:53.933 16:10:37 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.933 16:10:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:53.934 ************************************ 00:06:53.934 END TEST spdk_dd 00:06:53.934 ************************************ 00:06:53.934 16:10:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.934 16:10:37 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:53.934 16:10:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:53.934 16:10:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:53.934 16:10:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:53.934 16:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:53.934 16:10:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 
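Each spdk_dd negative test traced above follows the same shape: valid_exec_arg resolves the binary, NOT runs it with one deliberately bad argument, and the captured exit status es must be non-zero (statuses above 128 are reduced and well-known values collapse to 1 before the final assertion). Below is a standalone sketch of that check, without the common.sh NOT/valid_exec_arg helpers; the binary and dump-file paths are the ones printed in the trace.

    # Standalone sketch of the dd_wrong_blocksize check above; assumes the
    # spdk_dd binary and dump files exist at the paths printed in the trace.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    es=0
    "$SPDK_DD" --if="$IF" --of="$OF" --bs=0 || es=$?
    # An invalid --bs must make spdk_dd exit non-zero (es=22 in the run above).
    (( es != 0 )) || { echo "spdk_dd unexpectedly accepted --bs=0" >&2; exit 1; }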
00:06:53.934 16:10:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:53.934 16:10:37 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:53.934 16:10:37 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:53.934 16:10:37 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:53.934 16:10:37 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:53.934 16:10:37 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:53.934 16:10:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:53.934 16:10:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.934 16:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:53.934 ************************************ 00:06:53.934 START TEST nvmf_tcp 00:06:53.934 ************************************ 00:06:53.934 16:10:37 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:54.193 * Looking for test storage... 00:06:54.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.193 16:10:37 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.194 16:10:37 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.194 16:10:37 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.194 16:10:37 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.194 16:10:37 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.194 16:10:37 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.194 16:10:37 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.194 16:10:37 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:54.194 16:10:37 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:54.194 16:10:37 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.194 16:10:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:06:54.194 16:10:37 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:54.194 16:10:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.194 16:10:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.194 16:10:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 ************************************ 00:06:54.194 START TEST nvmf_host_management 00:06:54.194 ************************************ 00:06:54.194 
16:10:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:54.194 * Looking for test storage... 00:06:54.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:54.194 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:54.195 Cannot find device "nvmf_init_br" 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:54.195 Cannot find device "nvmf_tgt_br" 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:54.195 Cannot find device "nvmf_tgt_br2" 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:54.195 Cannot find device "nvmf_init_br" 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:54.195 Cannot find device "nvmf_tgt_br" 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:06:54.195 16:10:37 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:54.195 Cannot find device "nvmf_tgt_br2" 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:54.195 Cannot find device "nvmf_br" 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:06:54.195 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:54.454 Cannot find device "nvmf_init_if" 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:54.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:54.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:54.454 16:10:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:54.454 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:54.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:06:54.713 00:06:54.713 --- 10.0.0.2 ping statistics --- 00:06:54.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.713 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:54.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:54.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:06:54.713 00:06:54.713 --- 10.0.0.3 ping statistics --- 00:06:54.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.713 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:54.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:06:54.713 00:06:54.713 --- 10.0.0.1 ping statistics --- 00:06:54.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.713 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=64682 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64682 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64682 ']' 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.713 16:10:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.713 [2024-07-12 16:10:38.312923] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:54.713 [2024-07-12 16:10:38.313030] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.972 [2024-07-12 16:10:38.455389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.972 [2024-07-12 16:10:38.527304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.972 [2024-07-12 16:10:38.527364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.972 [2024-07-12 16:10:38.527385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.972 [2024-07-12 16:10:38.527395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.972 [2024-07-12 16:10:38.527404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
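The nvmf_veth_init bring-up traced above builds a two-sided topology: an initiator veth pair on the host (nvmf_init_if/nvmf_init_br, 10.0.0.1/24), target veth pairs whose inner ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), everything joined by the nvmf_br bridge, an iptables accept rule for TCP/4420, and a ping check per address. A condensed sketch of the same topology, keeping only one target pair and omitting error handling:

    # Condensed replay of the nvmf_veth_init commands traced above (one target
    # veth pair only; names and addresses taken from the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # target address must answer from the host side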
00:06:54.972 [2024-07-12 16:10:38.527566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.972 [2024-07-12 16:10:38.528104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.972 [2024-07-12 16:10:38.528242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.972 [2024-07-12 16:10:38.528251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.972 [2024-07-12 16:10:38.561469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 [2024-07-12 16:10:39.348141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 Malloc0 00:06:55.907 [2024-07-12 16:10:39.411664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64736 00:06:55.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
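The create_subsystem step above pipes a pre-built rpcs.txt batch into rpc_cmd; only its effects are visible in the trace (a Malloc0 bdev and a TCP listener on 10.0.0.2:4420, with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set at the top of host_management.sh, and the transport itself created by the explicit nvmf_create_transport call traced just before). A rough equivalent issued as individual scripts/rpc.py calls follows; the batch is not expanded in the log, so its exact contents and ordering are an assumption.

    # Approximate target-side RPC sequence; rpcs.txt itself is not shown in the
    # trace, so treat this as an illustration rather than the script's contents.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0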
00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64736 /var/tmp/bdevperf.sock 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64736 ']' 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:55.907 { 00:06:55.907 "params": { 00:06:55.907 "name": "Nvme$subsystem", 00:06:55.907 "trtype": "$TEST_TRANSPORT", 00:06:55.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:55.907 "adrfam": "ipv4", 00:06:55.907 "trsvcid": "$NVMF_PORT", 00:06:55.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:55.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:55.907 "hdgst": ${hdgst:-false}, 00:06:55.907 "ddgst": ${ddgst:-false} 00:06:55.907 }, 00:06:55.907 "method": "bdev_nvme_attach_controller" 00:06:55.907 } 00:06:55.907 EOF 00:06:55.907 )") 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:55.907 16:10:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:55.907 "params": { 00:06:55.907 "name": "Nvme0", 00:06:55.907 "trtype": "tcp", 00:06:55.907 "traddr": "10.0.0.2", 00:06:55.907 "adrfam": "ipv4", 00:06:55.907 "trsvcid": "4420", 00:06:55.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:55.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:55.907 "hdgst": false, 00:06:55.907 "ddgst": false 00:06:55.907 }, 00:06:55.907 "method": "bdev_nvme_attach_controller" 00:06:55.907 }' 00:06:55.907 [2024-07-12 16:10:39.516427] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:06:55.907 [2024-07-12 16:10:39.516524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64736 ] 00:06:56.165 [2024-07-12 16:10:39.658940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.165 [2024-07-12 16:10:39.729754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.165 [2024-07-12 16:10:39.772782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.165 Running I/O for 10 seconds... 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.103 16:10:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:57.103 [2024-07-12 16:10:40.596168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:57.103 [2024-07-12 16:10:40.596442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 
[2024-07-12 16:10:40.596665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.103 [2024-07-12 16:10:40.596862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.103 [2024-07-12 16:10:40.596873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 
16:10:40.596883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.596895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.596905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.596917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.596940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.596954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.596964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.596976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.596986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.596997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 
16:10:40.597118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 
16:10:40.597334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 
16:10:40.597554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.104 [2024-07-12 16:10:40.597683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1781ad0 is same with the state(5) to be set 00:06:57.104 [2024-07-12 16:10:40.597742] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1781ad0 was disconnected and freed. reset controller. 
00:06:57.104 [2024-07-12 16:10:40.597841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.104 [2024-07-12 16:10:40.597881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.104 [2024-07-12 16:10:40.597906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.104 [2024-07-12 16:10:40.597927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.104 [2024-07-12 16:10:40.597947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.104 [2024-07-12 16:10:40.597956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1782020 is same with the state(5) to be set 00:06:57.104 [2024-07-12 16:10:40.599105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:57.104 task offset: 8192 on job bdev=Nvme0n1 fails 00:06:57.104 00:06:57.105 Latency(us) 00:06:57.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.105 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:57.105 Job: Nvme0n1 ended in about 0.72 seconds with error 00:06:57.105 Verification LBA range: start 0x0 length 0x400 00:06:57.105 Nvme0n1 : 0.72 1504.94 94.06 88.53 0.00 39064.05 2234.18 43372.92 00:06:57.105 =================================================================================================================== 00:06:57.105 Total : 1504.94 94.06 88.53 0.00 39064.05 2234.18 43372.92 00:06:57.105 [2024-07-12 16:10:40.601125] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.105 [2024-07-12 16:10:40.601148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1782020 (9): Bad file descriptor 00:06:57.105 [2024-07-12 16:10:40.604144] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
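The output above is the heart of the host-management check: while bdevperf drives a verify workload against Nvme0n1, the test removes the host NQN from the subsystem (host_management.sh@84) and immediately re-adds it (@85). The target drops the connection, every queued WRITE completes as ABORTED - SQ DELETION, the I/O qpair is freed, and the subsequent controller reset succeeds. A minimal sketch of that fault-injection step, using the same RPCs against a target already serving nqn.2016-06.io.spdk:cnode0 (the rpc.py path matches this repo layout; all names are taken from the log):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Revoke the host's access while I/O is in flight; the target tears down the
# queue pairs and in-flight commands complete as ABORTED - SQ DELETION.
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Restore access so the initiator-side controller reset can reconnect.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0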
00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64736 00:06:58.040 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64736) - No such process 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:58.040 { 00:06:58.040 "params": { 00:06:58.040 "name": "Nvme$subsystem", 00:06:58.040 "trtype": "$TEST_TRANSPORT", 00:06:58.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:58.040 "adrfam": "ipv4", 00:06:58.040 "trsvcid": "$NVMF_PORT", 00:06:58.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:58.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:58.040 "hdgst": ${hdgst:-false}, 00:06:58.040 "ddgst": ${ddgst:-false} 00:06:58.040 }, 00:06:58.040 "method": "bdev_nvme_attach_controller" 00:06:58.040 } 00:06:58.040 EOF 00:06:58.040 )") 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:58.040 16:10:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:58.040 "params": { 00:06:58.040 "name": "Nvme0", 00:06:58.040 "trtype": "tcp", 00:06:58.040 "traddr": "10.0.0.2", 00:06:58.040 "adrfam": "ipv4", 00:06:58.040 "trsvcid": "4420", 00:06:58.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:58.040 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:58.040 "hdgst": false, 00:06:58.040 "ddgst": false 00:06:58.040 }, 00:06:58.040 "method": "bdev_nvme_attach_controller" 00:06:58.040 }' 00:06:58.040 [2024-07-12 16:10:41.654812] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:06:58.040 [2024-07-12 16:10:41.654906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64774 ] 00:06:58.298 [2024-07-12 16:10:41.789280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.298 [2024-07-12 16:10:41.846559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.298 [2024-07-12 16:10:41.885601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.298 Running I/O for 1 seconds... 
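Both bdevperf invocations above receive their attach configuration as JSON on an anonymous file descriptor (--json /dev/fd/63, then /dev/fd/62), produced by gen_nvmf_target_json and expanded by jq. Written out by hand, a rough equivalent would look like the sketch below; the bdev_nvme_attach_controller parameters are exactly the ones printed in the log, while the outer "subsystems"/"bdev"/"config" wrapper is the usual SPDK JSON-config layout the helper emits (reconstructed here, so treat it as an approximation rather than the helper's literal output):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# Attach one NVMe-oF/TCP controller, digests off, matching the printf output above.
config='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}'

# 64-deep, 64 KiB verify workload for 1 second, as in the second run above;
# process substitution supplies the /dev/fd/NN path seen in the log.
$BDEVPERF --json <(printf '%s' "$config") -q 64 -o 65536 -w verify -t 1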
00:06:59.670 00:06:59.670 Latency(us) 00:06:59.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.670 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:59.670 Verification LBA range: start 0x0 length 0x400 00:06:59.670 Nvme0n1 : 1.04 1599.42 99.96 0.00 0.00 39254.42 3723.64 36700.16 00:06:59.670 =================================================================================================================== 00:06:59.670 Total : 1599.42 99.96 0.00 0.00 39254.42 3723.64 36700.16 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:59.670 rmmod nvme_tcp 00:06:59.670 rmmod nvme_fabrics 00:06:59.670 rmmod nvme_keyring 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64682 ']' 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64682 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 64682 ']' 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 64682 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64682 00:06:59.670 killing process with pid 64682 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64682' 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 64682 00:06:59.670 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 64682 00:06:59.928 [2024-07-12 16:10:43.423724] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:59.928 00:06:59.928 real 0m5.749s 00:06:59.928 user 0m22.345s 00:06:59.928 sys 0m1.348s 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.928 16:10:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.928 ************************************ 00:06:59.928 END TEST nvmf_host_management 00:06:59.928 ************************************ 00:06:59.928 16:10:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:59.928 16:10:43 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:59.928 16:10:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:59.928 16:10:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.928 16:10:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.928 ************************************ 00:06:59.928 START TEST nvmf_lvol 00:06:59.928 ************************************ 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:59.928 * Looking for test storage... 
00:06:59.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:59.928 16:10:43 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:59.928 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:00.186 Cannot find device "nvmf_tgt_br" 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:00.186 Cannot find device "nvmf_tgt_br2" 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:00.186 Cannot find device "nvmf_tgt_br" 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:00.186 Cannot find device "nvmf_tgt_br2" 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:00.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:00.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:00.186 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:00.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:07:00.444 00:07:00.444 --- 10.0.0.2 ping statistics --- 00:07:00.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.444 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:00.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:00.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:00.444 00:07:00.444 --- 10.0.0.3 ping statistics --- 00:07:00.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.444 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:00.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:00.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:00.444 00:07:00.444 --- 10.0.0.1 ping statistics --- 00:07:00.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.444 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64989 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64989 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 64989 ']' 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.444 16:10:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.444 [2024-07-12 16:10:44.044900] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:00.444 [2024-07-12 16:10:44.044995] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.702 [2024-07-12 16:10:44.185841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.702 [2024-07-12 16:10:44.239631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.702 [2024-07-12 16:10:44.239698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:00.702 [2024-07-12 16:10:44.239724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.702 [2024-07-12 16:10:44.239731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.702 [2024-07-12 16:10:44.239738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.702 [2024-07-12 16:10:44.240179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.702 [2024-07-12 16:10:44.240537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.702 [2024-07-12 16:10:44.240575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.702 [2024-07-12 16:10:44.270280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.702 16:10:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.702 16:10:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:00.702 16:10:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:00.702 16:10:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:00.702 16:10:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.702 16:10:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.702 16:10:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:00.960 [2024-07-12 16:10:44.560379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.960 16:10:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:01.217 16:10:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:01.217 16:10:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:01.475 16:10:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:01.475 16:10:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:01.732 16:10:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:01.990 16:10:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=31a0e0ce-4f5e-4164-8e6a-1df22a75cb60 00:07:01.990 16:10:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31a0e0ce-4f5e-4164-8e6a-1df22a75cb60 lvol 20 00:07:02.247 16:10:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=96c1993b-bc50-44b2-8618-84b08105db80 00:07:02.247 16:10:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:02.504 16:10:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96c1993b-bc50-44b2-8618-84b08105db80 00:07:02.762 16:10:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:03.019 [2024-07-12 16:10:46.558173] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.019 16:10:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.277 16:10:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65054 00:07:03.277 16:10:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:03.277 16:10:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:04.209 16:10:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 96c1993b-bc50-44b2-8618-84b08105db80 MY_SNAPSHOT 00:07:04.466 16:10:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=07fde390-abb0-4b07-8651-4df67e04d961 00:07:04.466 16:10:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 96c1993b-bc50-44b2-8618-84b08105db80 30 00:07:04.724 16:10:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 07fde390-abb0-4b07-8651-4df67e04d961 MY_CLONE 00:07:04.982 16:10:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=93ff1be1-f0b7-441a-9bd1-47d9fbdd59cd 00:07:04.982 16:10:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 93ff1be1-f0b7-441a-9bd1-47d9fbdd59cd 00:07:05.548 16:10:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65054 00:07:13.656 Initializing NVMe Controllers 00:07:13.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:13.656 Controller IO queue size 128, less than required. 00:07:13.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:13.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:13.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:13.656 Initialization complete. Launching workers. 
00:07:13.656 ======================================================== 00:07:13.656 Latency(us) 00:07:13.656 Device Information : IOPS MiB/s Average min max 00:07:13.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10678.80 41.71 11996.84 1822.00 53713.60 00:07:13.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10754.40 42.01 11904.13 3072.24 93436.88 00:07:13.656 ======================================================== 00:07:13.656 Total : 21433.20 83.72 11950.32 1822.00 93436.88 00:07:13.656 00:07:13.656 16:10:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:13.656 16:10:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 96c1993b-bc50-44b2-8618-84b08105db80 00:07:13.914 16:10:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31a0e0ce-4f5e-4164-8e6a-1df22a75cb60 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.200 rmmod nvme_tcp 00:07:14.200 rmmod nvme_fabrics 00:07:14.200 rmmod nvme_keyring 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64989 ']' 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64989 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 64989 ']' 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 64989 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.200 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64989 00:07:14.468 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.468 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.468 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64989' 00:07:14.468 killing process with pid 64989 00:07:14.468 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 64989 00:07:14.468 16:10:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 64989 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
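For readability, the data path exercised by the nvmf_lvol run traced above reduces to roughly the RPC sequence below (rpc.py paths shortened; the angle-bracket placeholders stand for the UUIDs the log printed, e.g. 31a0e0ce-... for the lvstore and 96c1993b-... for the lvol; this is a condensed reading of the trace, not a standalone reproduction):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                        # Malloc0
    rpc.py bdev_malloc_create 64 512                        # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs               # prints the lvstore UUID
    rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 20
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT       # issued while perf I/O is in flight
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
    rpc.py bdev_lvol_inflate <clone-uuid>
    wait <perf-pid>                                         # 65054 in this run

Read this way, the latency table above covers I/O that ran concurrently with the snapshot, resize, clone and inflate calls, which is what the test is exercising.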
00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:14.468 ************************************ 00:07:14.468 END TEST nvmf_lvol 00:07:14.468 ************************************ 00:07:14.468 00:07:14.468 real 0m14.605s 00:07:14.468 user 1m1.929s 00:07:14.468 sys 0m4.019s 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.468 16:10:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:14.468 16:10:58 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:14.468 16:10:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:14.468 16:10:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.468 16:10:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.468 ************************************ 00:07:14.468 START TEST nvmf_lvs_grow 00:07:14.468 ************************************ 00:07:14.468 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:14.727 * Looking for test storage... 
00:07:14.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:14.727 Cannot find device "nvmf_tgt_br" 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:14.727 Cannot find device "nvmf_tgt_br2" 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:14.727 Cannot find device "nvmf_tgt_br" 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:14.727 Cannot find device "nvmf_tgt_br2" 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:14.727 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:14.727 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:14.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:14.986 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:14.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:14.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:07:14.987 00:07:14.987 --- 10.0.0.2 ping statistics --- 00:07:14.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.987 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:14.987 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:14.987 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:14.987 00:07:14.987 --- 10.0.0.3 ping statistics --- 00:07:14.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.987 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:14.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:14.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:14.987 00:07:14.987 --- 10.0.0.1 ping statistics --- 00:07:14.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.987 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65374 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65374 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65374 ']' 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
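The nvmf_veth_init block above amounts to the following topology (interface and namespace names, addresses and firewall rules exactly as they appear in the ip/iptables calls in the trace; link-up steps omitted): the target runs inside the nvmf_tgt_ns_spdk namespace, the initiator stays in the root namespace, and the veth peers are joined by the nvmf_br bridge.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge forwards in both directions before the target application is launched; the earlier "Cannot find device" and "Cannot open network namespace" messages are the cleanup step failing because the interfaces do not yet exist on a fresh run, which the script tolerates.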
00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.987 16:10:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 [2024-07-12 16:10:58.719441] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:15.246 [2024-07-12 16:10:58.719533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.246 [2024-07-12 16:10:58.860741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.246 [2024-07-12 16:10:58.923115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.246 [2024-07-12 16:10:58.923180] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.246 [2024-07-12 16:10:58.923194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.246 [2024-07-12 16:10:58.923203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.246 [2024-07-12 16:10:58.923210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.246 [2024-07-12 16:10:58.923241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.246 [2024-07-12 16:10:58.954238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.181 16:10:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.181 16:10:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:16.181 16:10:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.181 16:10:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.181 16:10:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.181 16:10:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.181 16:10:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:16.440 [2024-07-12 16:10:59.917311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.440 ************************************ 00:07:16.440 START TEST lvs_grow_clean 00:07:16.440 ************************************ 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:16.440 16:10:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:16.440 16:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:16.698 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:16.698 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:16.957 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c587d499-12d6-456b-9eaa-03ec3651719d 00:07:16.957 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:16.957 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:17.216 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:17.217 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:17.217 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c587d499-12d6-456b-9eaa-03ec3651719d lvol 150 00:07:17.476 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=188a6163-ac8b-46f5-830d-fbc66fdc8565 00:07:17.476 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:17.476 16:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:17.476 [2024-07-12 16:11:01.146671] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:17.476 [2024-07-12 16:11:01.146767] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:17.476 true 00:07:17.476 16:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:17.476 16:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:17.735 16:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:17.735 16:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.994 16:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 188a6163-ac8b-46f5-830d-fbc66fdc8565 00:07:18.253 16:11:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.512 [2024-07-12 16:11:02.027265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.512 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65462 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65462 /var/tmp/bdevperf.sock 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65462 ']' 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.772 16:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:18.772 [2024-07-12 16:11:02.354637] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:07:18.772 [2024-07-12 16:11:02.354743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65462 ] 00:07:18.772 [2024-07-12 16:11:02.486748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.031 [2024-07-12 16:11:02.552291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.031 [2024-07-12 16:11:02.580294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.596 16:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.596 16:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:19.596 16:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:19.854 Nvme0n1 00:07:20.113 16:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:20.371 [ 00:07:20.371 { 00:07:20.371 "name": "Nvme0n1", 00:07:20.371 "aliases": [ 00:07:20.371 "188a6163-ac8b-46f5-830d-fbc66fdc8565" 00:07:20.371 ], 00:07:20.371 "product_name": "NVMe disk", 00:07:20.371 "block_size": 4096, 00:07:20.371 "num_blocks": 38912, 00:07:20.371 "uuid": "188a6163-ac8b-46f5-830d-fbc66fdc8565", 00:07:20.371 "assigned_rate_limits": { 00:07:20.371 "rw_ios_per_sec": 0, 00:07:20.371 "rw_mbytes_per_sec": 0, 00:07:20.371 "r_mbytes_per_sec": 0, 00:07:20.371 "w_mbytes_per_sec": 0 00:07:20.371 }, 00:07:20.371 "claimed": false, 00:07:20.371 "zoned": false, 00:07:20.371 "supported_io_types": { 00:07:20.371 "read": true, 00:07:20.371 "write": true, 00:07:20.371 "unmap": true, 00:07:20.371 "flush": true, 00:07:20.371 "reset": true, 00:07:20.371 "nvme_admin": true, 00:07:20.371 "nvme_io": true, 00:07:20.371 "nvme_io_md": false, 00:07:20.371 "write_zeroes": true, 00:07:20.371 "zcopy": false, 00:07:20.371 "get_zone_info": false, 00:07:20.371 "zone_management": false, 00:07:20.371 "zone_append": false, 00:07:20.371 "compare": true, 00:07:20.371 "compare_and_write": true, 00:07:20.371 "abort": true, 00:07:20.371 "seek_hole": false, 00:07:20.372 "seek_data": false, 00:07:20.372 "copy": true, 00:07:20.372 "nvme_iov_md": false 00:07:20.372 }, 00:07:20.372 "memory_domains": [ 00:07:20.372 { 00:07:20.372 "dma_device_id": "system", 00:07:20.372 "dma_device_type": 1 00:07:20.372 } 00:07:20.372 ], 00:07:20.372 "driver_specific": { 00:07:20.372 "nvme": [ 00:07:20.372 { 00:07:20.372 "trid": { 00:07:20.372 "trtype": "TCP", 00:07:20.372 "adrfam": "IPv4", 00:07:20.372 "traddr": "10.0.0.2", 00:07:20.372 "trsvcid": "4420", 00:07:20.372 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:20.372 }, 00:07:20.372 "ctrlr_data": { 00:07:20.372 "cntlid": 1, 00:07:20.372 "vendor_id": "0x8086", 00:07:20.372 "model_number": "SPDK bdev Controller", 00:07:20.372 "serial_number": "SPDK0", 00:07:20.372 "firmware_revision": "24.09", 00:07:20.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:20.372 "oacs": { 00:07:20.372 "security": 0, 00:07:20.372 "format": 0, 00:07:20.372 "firmware": 0, 00:07:20.372 "ns_manage": 0 00:07:20.372 }, 00:07:20.372 "multi_ctrlr": true, 00:07:20.372 
"ana_reporting": false 00:07:20.372 }, 00:07:20.372 "vs": { 00:07:20.372 "nvme_version": "1.3" 00:07:20.372 }, 00:07:20.372 "ns_data": { 00:07:20.372 "id": 1, 00:07:20.372 "can_share": true 00:07:20.372 } 00:07:20.372 } 00:07:20.372 ], 00:07:20.372 "mp_policy": "active_passive" 00:07:20.372 } 00:07:20.372 } 00:07:20.372 ] 00:07:20.372 16:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65480 00:07:20.372 16:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:20.372 16:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:20.372 Running I/O for 10 seconds... 00:07:21.309 Latency(us) 00:07:21.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.309 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:21.309 =================================================================================================================== 00:07:21.309 Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:21.309 00:07:22.245 16:11:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:22.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.504 Nvme0n1 : 2.00 6540.00 25.55 0.00 0.00 0.00 0.00 0.00 00:07:22.504 =================================================================================================================== 00:07:22.504 Total : 6540.00 25.55 0.00 0.00 0.00 0.00 0.00 00:07:22.504 00:07:22.504 true 00:07:22.504 16:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:22.504 16:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:23.071 16:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:23.071 16:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:23.071 16:11:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65480 00:07:23.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.329 Nvme0n1 : 3.00 6561.33 25.63 0.00 0.00 0.00 0.00 0.00 00:07:23.329 =================================================================================================================== 00:07:23.329 Total : 6561.33 25.63 0.00 0.00 0.00 0.00 0.00 00:07:23.329 00:07:24.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.264 Nvme0n1 : 4.00 6540.25 25.55 0.00 0.00 0.00 0.00 0.00 00:07:24.264 =================================================================================================================== 00:07:24.264 Total : 6540.25 25.55 0.00 0.00 0.00 0.00 0.00 00:07:24.264 00:07:25.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.638 Nvme0n1 : 5.00 6553.00 25.60 0.00 0.00 0.00 0.00 0.00 00:07:25.638 =================================================================================================================== 00:07:25.638 Total : 6553.00 25.60 0.00 0.00 0.00 
0.00 0.00 00:07:25.638 00:07:26.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.572 Nvme0n1 : 6.00 6540.33 25.55 0.00 0.00 0.00 0.00 0.00 00:07:26.572 =================================================================================================================== 00:07:26.572 Total : 6540.33 25.55 0.00 0.00 0.00 0.00 0.00 00:07:26.572 00:07:27.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.505 Nvme0n1 : 7.00 6531.29 25.51 0.00 0.00 0.00 0.00 0.00 00:07:27.505 =================================================================================================================== 00:07:27.505 Total : 6531.29 25.51 0.00 0.00 0.00 0.00 0.00 00:07:27.505 00:07:28.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.444 Nvme0n1 : 8.00 6540.38 25.55 0.00 0.00 0.00 0.00 0.00 00:07:28.444 =================================================================================================================== 00:07:28.444 Total : 6540.38 25.55 0.00 0.00 0.00 0.00 0.00 00:07:28.444 00:07:29.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.376 Nvme0n1 : 9.00 6533.33 25.52 0.00 0.00 0.00 0.00 0.00 00:07:29.376 =================================================================================================================== 00:07:29.376 Total : 6533.33 25.52 0.00 0.00 0.00 0.00 0.00 00:07:29.376 00:07:30.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.307 Nvme0n1 : 10.00 6515.00 25.45 0.00 0.00 0.00 0.00 0.00 00:07:30.307 =================================================================================================================== 00:07:30.307 Total : 6515.00 25.45 0.00 0.00 0.00 0.00 0.00 00:07:30.307 00:07:30.307 00:07:30.307 Latency(us) 00:07:30.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.307 Nvme0n1 : 10.01 6518.61 25.46 0.00 0.00 19631.41 17277.67 47900.86 00:07:30.307 =================================================================================================================== 00:07:30.307 Total : 6518.61 25.46 0.00 0.00 19631.41 17277.67 47900.86 00:07:30.307 0 00:07:30.307 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65462 00:07:30.307 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65462 ']' 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65462 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65462 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65462' 00:07:30.308 killing process with pid 65462 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65462 
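The grow check that lvs_grow_clean is exercising here condenses to the commands below (file paths shortened; sizes, cluster counts and block counts are the ones in the trace; treating the one-cluster shortfall as lvstore metadata overhead is an inference, not something the log states explicitly):

    truncate -s 200M aio_bdev                              # 200 MiB backing file
    rpc.py bdev_aio_create aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs      # total_data_clusters == 49
    rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 150     # 150 MiB lvol, served as Nvme0n1
    truncate -s 400M aio_bdev                              # grow the file under the AIO bdev
    rpc.py bdev_aio_rescan aio_bdev                        # 51200 -> 102400 blocks per the notice
    rpc.py bdev_lvol_grow_lvstore -u <lvstore-uuid>        # total_data_clusters == 99 afterwards

With 4 MiB clusters, 200 MiB is 50 clusters and 400 MiB is 100, so the accepted values of 49 and 99 are one cluster short of raw capacity at each size; passing the check means the existing lvstore picked up the added space without being recreated. The free_clusters == 61 check a little further down follows the same arithmetic: the 150 MiB thick-provisioned lvol occupies 38 of the 99 clusters (num_allocated_clusters in the bdev dump), leaving 61 free.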
00:07:30.308 Received shutdown signal, test time was about 10.000000 seconds 00:07:30.308 00:07:30.308 Latency(us) 00:07:30.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.308 =================================================================================================================== 00:07:30.308 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:30.308 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65462 00:07:30.566 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.825 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.084 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:31.084 16:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:31.342 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:31.342 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:31.342 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:31.600 [2024-07-12 16:11:15.289475] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:31.858 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:32.117 request: 00:07:32.117 { 00:07:32.117 "uuid": "c587d499-12d6-456b-9eaa-03ec3651719d", 00:07:32.117 "method": "bdev_lvol_get_lvstores", 00:07:32.117 "req_id": 1 00:07:32.117 } 00:07:32.117 Got JSON-RPC error response 00:07:32.117 response: 00:07:32.117 { 00:07:32.117 "code": -19, 00:07:32.117 "message": "No such device" 00:07:32.117 } 00:07:32.117 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:07:32.117 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:32.117 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:32.117 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:32.117 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:32.376 aio_bdev 00:07:32.376 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 188a6163-ac8b-46f5-830d-fbc66fdc8565 00:07:32.376 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=188a6163-ac8b-46f5-830d-fbc66fdc8565 00:07:32.376 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:32.376 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:07:32.376 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:32.376 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:32.376 16:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:32.634 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 188a6163-ac8b-46f5-830d-fbc66fdc8565 -t 2000 00:07:32.634 [ 00:07:32.634 { 00:07:32.634 "name": "188a6163-ac8b-46f5-830d-fbc66fdc8565", 00:07:32.634 "aliases": [ 00:07:32.634 "lvs/lvol" 00:07:32.634 ], 00:07:32.634 "product_name": "Logical Volume", 00:07:32.634 "block_size": 4096, 00:07:32.634 "num_blocks": 38912, 00:07:32.634 "uuid": "188a6163-ac8b-46f5-830d-fbc66fdc8565", 00:07:32.634 "assigned_rate_limits": { 00:07:32.634 "rw_ios_per_sec": 0, 00:07:32.634 "rw_mbytes_per_sec": 0, 00:07:32.634 "r_mbytes_per_sec": 0, 00:07:32.634 "w_mbytes_per_sec": 0 00:07:32.634 }, 00:07:32.634 "claimed": false, 00:07:32.634 "zoned": false, 00:07:32.634 "supported_io_types": { 00:07:32.634 "read": true, 00:07:32.634 "write": true, 00:07:32.634 "unmap": true, 00:07:32.634 "flush": false, 00:07:32.634 "reset": true, 00:07:32.634 "nvme_admin": false, 00:07:32.634 "nvme_io": false, 00:07:32.634 "nvme_io_md": false, 00:07:32.634 "write_zeroes": true, 00:07:32.634 "zcopy": false, 00:07:32.634 "get_zone_info": false, 00:07:32.634 "zone_management": false, 00:07:32.634 "zone_append": false, 00:07:32.634 "compare": false, 00:07:32.634 "compare_and_write": false, 00:07:32.634 "abort": false, 00:07:32.634 "seek_hole": true, 00:07:32.634 "seek_data": true, 00:07:32.634 "copy": false, 00:07:32.634 "nvme_iov_md": false 00:07:32.634 }, 00:07:32.634 "driver_specific": { 00:07:32.634 "lvol": { 
00:07:32.634 "lvol_store_uuid": "c587d499-12d6-456b-9eaa-03ec3651719d", 00:07:32.634 "base_bdev": "aio_bdev", 00:07:32.634 "thin_provision": false, 00:07:32.634 "num_allocated_clusters": 38, 00:07:32.634 "snapshot": false, 00:07:32.634 "clone": false, 00:07:32.634 "esnap_clone": false 00:07:32.634 } 00:07:32.634 } 00:07:32.634 } 00:07:32.634 ] 00:07:32.634 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:07:32.634 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:32.635 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:32.893 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:32.893 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:32.893 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:33.152 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:33.152 16:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 188a6163-ac8b-46f5-830d-fbc66fdc8565 00:07:33.411 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c587d499-12d6-456b-9eaa-03ec3651719d 00:07:33.669 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:33.928 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:34.187 00:07:34.187 real 0m17.877s 00:07:34.187 user 0m16.950s 00:07:34.187 sys 0m2.368s 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.187 ************************************ 00:07:34.187 END TEST lvs_grow_clean 00:07:34.187 ************************************ 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.187 ************************************ 00:07:34.187 START TEST lvs_grow_dirty 00:07:34.187 ************************************ 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:34.187 16:11:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:34.187 16:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:34.446 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:34.446 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:34.706 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=988f8036-3550-458f-8151-c6d42ea24b5c 00:07:34.706 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:34.706 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:34.965 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:34.965 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:34.965 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 988f8036-3550-458f-8151-c6d42ea24b5c lvol 150 00:07:35.224 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=96682130-5807-4a08-b71d-030fa5aedbd9 00:07:35.224 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:35.224 16:11:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:35.482 [2024-07-12 16:11:19.138673] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:35.482 [2024-07-12 16:11:19.138734] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:35.482 true 00:07:35.482 16:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:35.482 16:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:35.741 16:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:07:35.741 16:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.000 16:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96682130-5807-4a08-b71d-030fa5aedbd9 00:07:36.258 16:11:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:36.516 [2024-07-12 16:11:20.019180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.516 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65725 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65725 /var/tmp/bdevperf.sock 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 65725 ']' 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.775 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.775 [2024-07-12 16:11:20.296248] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:07:36.775 [2024-07-12 16:11:20.296341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65725 ] 00:07:36.775 [2024-07-12 16:11:20.429650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.775 [2024-07-12 16:11:20.484022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.033 [2024-07-12 16:11:20.512833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.034 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.034 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:07:37.034 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:37.292 Nvme0n1 00:07:37.292 16:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:37.551 [ 00:07:37.551 { 00:07:37.551 "name": "Nvme0n1", 00:07:37.551 "aliases": [ 00:07:37.551 "96682130-5807-4a08-b71d-030fa5aedbd9" 00:07:37.551 ], 00:07:37.551 "product_name": "NVMe disk", 00:07:37.551 "block_size": 4096, 00:07:37.551 "num_blocks": 38912, 00:07:37.551 "uuid": "96682130-5807-4a08-b71d-030fa5aedbd9", 00:07:37.551 "assigned_rate_limits": { 00:07:37.551 "rw_ios_per_sec": 0, 00:07:37.551 "rw_mbytes_per_sec": 0, 00:07:37.551 "r_mbytes_per_sec": 0, 00:07:37.551 "w_mbytes_per_sec": 0 00:07:37.551 }, 00:07:37.551 "claimed": false, 00:07:37.551 "zoned": false, 00:07:37.551 "supported_io_types": { 00:07:37.551 "read": true, 00:07:37.551 "write": true, 00:07:37.551 "unmap": true, 00:07:37.551 "flush": true, 00:07:37.551 "reset": true, 00:07:37.551 "nvme_admin": true, 00:07:37.551 "nvme_io": true, 00:07:37.551 "nvme_io_md": false, 00:07:37.551 "write_zeroes": true, 00:07:37.551 "zcopy": false, 00:07:37.551 "get_zone_info": false, 00:07:37.551 "zone_management": false, 00:07:37.551 "zone_append": false, 00:07:37.551 "compare": true, 00:07:37.551 "compare_and_write": true, 00:07:37.551 "abort": true, 00:07:37.551 "seek_hole": false, 00:07:37.551 "seek_data": false, 00:07:37.551 "copy": true, 00:07:37.551 "nvme_iov_md": false 00:07:37.551 }, 00:07:37.551 "memory_domains": [ 00:07:37.551 { 00:07:37.551 "dma_device_id": "system", 00:07:37.551 "dma_device_type": 1 00:07:37.551 } 00:07:37.551 ], 00:07:37.551 "driver_specific": { 00:07:37.551 "nvme": [ 00:07:37.551 { 00:07:37.551 "trid": { 00:07:37.551 "trtype": "TCP", 00:07:37.551 "adrfam": "IPv4", 00:07:37.551 "traddr": "10.0.0.2", 00:07:37.551 "trsvcid": "4420", 00:07:37.551 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:37.551 }, 00:07:37.551 "ctrlr_data": { 00:07:37.551 "cntlid": 1, 00:07:37.551 "vendor_id": "0x8086", 00:07:37.551 "model_number": "SPDK bdev Controller", 00:07:37.551 "serial_number": "SPDK0", 00:07:37.551 "firmware_revision": "24.09", 00:07:37.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.551 "oacs": { 00:07:37.551 "security": 0, 00:07:37.551 "format": 0, 00:07:37.551 "firmware": 0, 00:07:37.551 "ns_manage": 0 00:07:37.551 }, 00:07:37.551 "multi_ctrlr": true, 00:07:37.551 
"ana_reporting": false 00:07:37.551 }, 00:07:37.551 "vs": { 00:07:37.551 "nvme_version": "1.3" 00:07:37.551 }, 00:07:37.551 "ns_data": { 00:07:37.552 "id": 1, 00:07:37.552 "can_share": true 00:07:37.552 } 00:07:37.552 } 00:07:37.552 ], 00:07:37.552 "mp_policy": "active_passive" 00:07:37.552 } 00:07:37.552 } 00:07:37.552 ] 00:07:37.552 16:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65736 00:07:37.552 16:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:37.552 16:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:37.552 Running I/O for 10 seconds... 00:07:38.488 Latency(us) 00:07:38.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.488 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:07:38.488 =================================================================================================================== 00:07:38.488 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:07:38.488 00:07:39.424 16:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:39.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.682 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:07:39.682 =================================================================================================================== 00:07:39.682 Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:07:39.682 00:07:39.682 true 00:07:39.682 16:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:39.682 16:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:40.248 16:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:40.248 16:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:40.248 16:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65736 00:07:40.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.506 Nvme0n1 : 3.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:07:40.506 =================================================================================================================== 00:07:40.506 Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:07:40.506 00:07:41.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.441 Nvme0n1 : 4.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:07:41.441 =================================================================================================================== 00:07:41.441 Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:07:41.441 00:07:42.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.816 Nvme0n1 : 5.00 6629.40 25.90 0.00 0.00 0.00 0.00 0.00 00:07:42.816 =================================================================================================================== 00:07:42.816 Total : 6629.40 25.90 0.00 0.00 0.00 
0.00 0.00 00:07:42.816 00:07:43.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.753 Nvme0n1 : 6.00 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:07:43.753 =================================================================================================================== 00:07:43.753 Total : 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:07:43.753 00:07:44.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.689 Nvme0n1 : 7.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:44.689 =================================================================================================================== 00:07:44.689 Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:44.689 00:07:45.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.653 Nvme0n1 : 8.00 6502.50 25.40 0.00 0.00 0.00 0.00 0.00 00:07:45.653 =================================================================================================================== 00:07:45.653 Total : 6502.50 25.40 0.00 0.00 0.00 0.00 0.00 00:07:45.653 00:07:46.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.587 Nvme0n1 : 9.00 6471.44 25.28 0.00 0.00 0.00 0.00 0.00 00:07:46.587 =================================================================================================================== 00:07:46.588 Total : 6471.44 25.28 0.00 0.00 0.00 0.00 0.00 00:07:46.588 00:07:47.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.521 Nvme0n1 : 10.00 6459.30 25.23 0.00 0.00 0.00 0.00 0.00 00:07:47.521 =================================================================================================================== 00:07:47.521 Total : 6459.30 25.23 0.00 0.00 0.00 0.00 0.00 00:07:47.521 00:07:47.521 00:07:47.521 Latency(us) 00:07:47.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.521 Nvme0n1 : 10.01 6462.82 25.25 0.00 0.00 19799.62 14894.55 132501.88 00:07:47.521 =================================================================================================================== 00:07:47.521 Total : 6462.82 25.25 0.00 0.00 19799.62 14894.55 132501.88 00:07:47.521 0 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65725 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 65725 ']' 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 65725 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65725 00:07:47.521 killing process with pid 65725 00:07:47.521 Received shutdown signal, test time was about 10.000000 seconds 00:07:47.521 00:07:47.521 Latency(us) 00:07:47.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.521 =================================================================================================================== 00:07:47.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65725' 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 65725 00:07:47.521 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 65725 00:07:47.778 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.036 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:48.294 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:48.294 16:11:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65374 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65374 00:07:48.553 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65374 Killed "${NVMF_APP[@]}" "$@" 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65874 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65874 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 65874 ']' 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.553 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 [2024-07-12 16:11:32.284731] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:48.811 [2024-07-12 16:11:32.284834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.811 [2024-07-12 16:11:32.421728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.811 [2024-07-12 16:11:32.482719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.811 [2024-07-12 16:11:32.482790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.811 [2024-07-12 16:11:32.482818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.811 [2024-07-12 16:11:32.482827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.812 [2024-07-12 16:11:32.482834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.812 [2024-07-12 16:11:32.482859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.812 [2024-07-12 16:11:32.515179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.070 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.070 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:07:49.070 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.070 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.070 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.070 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.070 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.329 [2024-07-12 16:11:32.805840] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:49.329 [2024-07-12 16:11:32.806141] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:49.329 [2024-07-12 16:11:32.806342] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:49.329 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:49.329 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 96682130-5807-4a08-b71d-030fa5aedbd9 00:07:49.329 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=96682130-5807-4a08-b71d-030fa5aedbd9 00:07:49.329 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:49.329 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
00:07:49.329 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:49.329 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:49.329 16:11:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.588 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 96682130-5807-4a08-b71d-030fa5aedbd9 -t 2000 00:07:49.848 [ 00:07:49.848 { 00:07:49.848 "name": "96682130-5807-4a08-b71d-030fa5aedbd9", 00:07:49.848 "aliases": [ 00:07:49.848 "lvs/lvol" 00:07:49.848 ], 00:07:49.848 "product_name": "Logical Volume", 00:07:49.848 "block_size": 4096, 00:07:49.848 "num_blocks": 38912, 00:07:49.848 "uuid": "96682130-5807-4a08-b71d-030fa5aedbd9", 00:07:49.848 "assigned_rate_limits": { 00:07:49.848 "rw_ios_per_sec": 0, 00:07:49.848 "rw_mbytes_per_sec": 0, 00:07:49.848 "r_mbytes_per_sec": 0, 00:07:49.848 "w_mbytes_per_sec": 0 00:07:49.848 }, 00:07:49.848 "claimed": false, 00:07:49.848 "zoned": false, 00:07:49.848 "supported_io_types": { 00:07:49.848 "read": true, 00:07:49.848 "write": true, 00:07:49.848 "unmap": true, 00:07:49.848 "flush": false, 00:07:49.848 "reset": true, 00:07:49.848 "nvme_admin": false, 00:07:49.848 "nvme_io": false, 00:07:49.848 "nvme_io_md": false, 00:07:49.848 "write_zeroes": true, 00:07:49.848 "zcopy": false, 00:07:49.848 "get_zone_info": false, 00:07:49.848 "zone_management": false, 00:07:49.848 "zone_append": false, 00:07:49.848 "compare": false, 00:07:49.848 "compare_and_write": false, 00:07:49.848 "abort": false, 00:07:49.848 "seek_hole": true, 00:07:49.848 "seek_data": true, 00:07:49.848 "copy": false, 00:07:49.848 "nvme_iov_md": false 00:07:49.848 }, 00:07:49.848 "driver_specific": { 00:07:49.848 "lvol": { 00:07:49.848 "lvol_store_uuid": "988f8036-3550-458f-8151-c6d42ea24b5c", 00:07:49.848 "base_bdev": "aio_bdev", 00:07:49.848 "thin_provision": false, 00:07:49.848 "num_allocated_clusters": 38, 00:07:49.848 "snapshot": false, 00:07:49.848 "clone": false, 00:07:49.848 "esnap_clone": false 00:07:49.848 } 00:07:49.848 } 00:07:49.848 } 00:07:49.848 ] 00:07:49.848 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:07:49.848 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:49.848 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:50.106 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:50.106 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:50.106 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:50.364 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:50.364 16:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.622 [2024-07-12 16:11:34.103601] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:50.622 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:50.880 request: 00:07:50.880 { 00:07:50.880 "uuid": "988f8036-3550-458f-8151-c6d42ea24b5c", 00:07:50.880 "method": "bdev_lvol_get_lvstores", 00:07:50.880 "req_id": 1 00:07:50.880 } 00:07:50.880 Got JSON-RPC error response 00:07:50.880 response: 00:07:50.880 { 00:07:50.880 "code": -19, 00:07:50.880 "message": "No such device" 00:07:50.880 } 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.880 aio_bdev 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 96682130-5807-4a08-b71d-030fa5aedbd9 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=96682130-5807-4a08-b71d-030fa5aedbd9 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:50.880 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:07:50.881 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:50.881 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:50.881 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:51.139 16:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 96682130-5807-4a08-b71d-030fa5aedbd9 -t 2000 00:07:51.396 [ 00:07:51.396 { 00:07:51.396 "name": "96682130-5807-4a08-b71d-030fa5aedbd9", 00:07:51.397 "aliases": [ 00:07:51.397 "lvs/lvol" 00:07:51.397 ], 00:07:51.397 "product_name": "Logical Volume", 00:07:51.397 "block_size": 4096, 00:07:51.397 "num_blocks": 38912, 00:07:51.397 "uuid": "96682130-5807-4a08-b71d-030fa5aedbd9", 00:07:51.397 "assigned_rate_limits": { 00:07:51.397 "rw_ios_per_sec": 0, 00:07:51.397 "rw_mbytes_per_sec": 0, 00:07:51.397 "r_mbytes_per_sec": 0, 00:07:51.397 "w_mbytes_per_sec": 0 00:07:51.397 }, 00:07:51.397 "claimed": false, 00:07:51.397 "zoned": false, 00:07:51.397 "supported_io_types": { 00:07:51.397 "read": true, 00:07:51.397 "write": true, 00:07:51.397 "unmap": true, 00:07:51.397 "flush": false, 00:07:51.397 "reset": true, 00:07:51.397 "nvme_admin": false, 00:07:51.397 "nvme_io": false, 00:07:51.397 "nvme_io_md": false, 00:07:51.397 "write_zeroes": true, 00:07:51.397 "zcopy": false, 00:07:51.397 "get_zone_info": false, 00:07:51.397 "zone_management": false, 00:07:51.397 "zone_append": false, 00:07:51.397 "compare": false, 00:07:51.397 "compare_and_write": false, 00:07:51.397 "abort": false, 00:07:51.397 "seek_hole": true, 00:07:51.397 "seek_data": true, 00:07:51.397 "copy": false, 00:07:51.397 "nvme_iov_md": false 00:07:51.397 }, 00:07:51.397 "driver_specific": { 00:07:51.397 "lvol": { 00:07:51.397 "lvol_store_uuid": "988f8036-3550-458f-8151-c6d42ea24b5c", 00:07:51.397 "base_bdev": "aio_bdev", 00:07:51.397 "thin_provision": false, 00:07:51.397 "num_allocated_clusters": 38, 00:07:51.397 "snapshot": false, 00:07:51.397 "clone": false, 00:07:51.397 "esnap_clone": false 00:07:51.397 } 00:07:51.397 } 00:07:51.397 } 00:07:51.397 ] 00:07:51.397 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:07:51.397 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:51.397 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:51.655 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:51.655 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:51.655 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:51.913 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:51.913 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 96682130-5807-4a08-b71d-030fa5aedbd9 00:07:52.171 16:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 988f8036-3550-458f-8151-c6d42ea24b5c 00:07:52.429 16:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.687 16:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:52.945 00:07:52.945 real 0m18.687s 00:07:52.945 user 0m38.504s 00:07:52.945 sys 0m9.520s 00:07:52.945 16:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.945 16:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.945 ************************************ 00:07:52.945 END TEST lvs_grow_dirty 00:07:52.945 ************************************ 00:07:52.945 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:07:52.945 16:11:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:52.945 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:07:52.945 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:07:52.945 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:07:52.945 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:52.946 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:07:52.946 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:07:52.946 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:07:52.946 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:52.946 nvmf_trace.0 00:07:52.946 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:07:52.946 16:11:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:52.946 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.946 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.204 rmmod nvme_tcp 00:07:53.204 rmmod nvme_fabrics 00:07:53.204 rmmod nvme_keyring 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65874 ']' 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65874 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 65874 ']' 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 65874 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:07:53.204 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65874 00:07:53.462 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:53.462 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:53.462 killing process with pid 65874 00:07:53.462 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65874' 00:07:53.462 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 65874 00:07:53.462 16:11:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 65874 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:53.462 00:07:53.462 real 0m38.936s 00:07:53.462 user 1m0.866s 00:07:53.462 sys 0m12.592s 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.462 16:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.462 ************************************ 00:07:53.462 END TEST nvmf_lvs_grow 00:07:53.462 ************************************ 00:07:53.462 16:11:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:53.462 16:11:37 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:53.462 16:11:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.462 16:11:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.462 16:11:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.462 ************************************ 00:07:53.462 START TEST nvmf_bdev_io_wait 00:07:53.462 ************************************ 00:07:53.462 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:53.719 * Looking for test storage... 
00:07:53.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:07:53.719 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:53.720 Cannot find device "nvmf_tgt_br" 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.720 Cannot find device "nvmf_tgt_br2" 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:53.720 Cannot find device "nvmf_tgt_br" 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:53.720 Cannot find device "nvmf_tgt_br2" 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.720 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.978 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:53.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:07:53.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:07:53.979 00:07:53.979 --- 10.0.0.2 ping statistics --- 00:07:53.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.979 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:53.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:07:53.979 00:07:53.979 --- 10.0.0.3 ping statistics --- 00:07:53.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.979 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:53.979 00:07:53.979 --- 10.0.0.1 ping statistics --- 00:07:53.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.979 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66171 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66171 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66171 ']' 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.979 16:11:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.979 [2024-07-12 16:11:37.693113] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:53.979 [2024-07-12 16:11:37.693192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.236 [2024-07-12 16:11:37.833592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.236 [2024-07-12 16:11:37.889731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.236 [2024-07-12 16:11:37.889804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.236 [2024-07-12 16:11:37.889831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.236 [2024-07-12 16:11:37.889839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.236 [2024-07-12 16:11:37.889845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.236 [2024-07-12 16:11:37.890287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.236 [2024-07-12 16:11:37.890555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.236 [2024-07-12 16:11:37.890713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.236 [2024-07-12 16:11:37.890719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.166 [2024-07-12 16:11:38.773324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.166 
16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.166 [2024-07-12 16:11:38.784185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:55.166 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.167 Malloc0 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.167 [2024-07-12 16:11:38.845106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66212 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66214 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:55.167 { 00:07:55.167 "params": { 00:07:55.167 "name": "Nvme$subsystem", 00:07:55.167 "trtype": "$TEST_TRANSPORT", 00:07:55.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.167 "adrfam": "ipv4", 00:07:55.167 "trsvcid": "$NVMF_PORT", 00:07:55.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.167 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:55.167 "hdgst": ${hdgst:-false}, 00:07:55.167 "ddgst": ${ddgst:-false} 00:07:55.167 }, 00:07:55.167 "method": "bdev_nvme_attach_controller" 00:07:55.167 } 00:07:55.167 EOF 00:07:55.167 )") 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66216 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:55.167 { 00:07:55.167 "params": { 00:07:55.167 "name": "Nvme$subsystem", 00:07:55.167 "trtype": "$TEST_TRANSPORT", 00:07:55.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.167 "adrfam": "ipv4", 00:07:55.167 "trsvcid": "$NVMF_PORT", 00:07:55.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.167 "hdgst": ${hdgst:-false}, 00:07:55.167 "ddgst": ${ddgst:-false} 00:07:55.167 }, 00:07:55.167 "method": "bdev_nvme_attach_controller" 00:07:55.167 } 00:07:55.167 EOF 00:07:55.167 )") 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66219 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:55.167 { 00:07:55.167 "params": { 00:07:55.167 "name": "Nvme$subsystem", 00:07:55.167 "trtype": "$TEST_TRANSPORT", 00:07:55.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.167 "adrfam": "ipv4", 00:07:55.167 "trsvcid": "$NVMF_PORT", 00:07:55.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.167 "hdgst": ${hdgst:-false}, 00:07:55.167 "ddgst": ${ddgst:-false} 00:07:55.167 }, 00:07:55.167 "method": "bdev_nvme_attach_controller" 00:07:55.167 } 00:07:55.167 EOF 00:07:55.167 )") 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:55.167 "params": { 00:07:55.167 "name": "Nvme1", 00:07:55.167 "trtype": "tcp", 00:07:55.167 "traddr": "10.0.0.2", 00:07:55.167 "adrfam": "ipv4", 00:07:55.167 "trsvcid": "4420", 00:07:55.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.167 "hdgst": false, 00:07:55.167 "ddgst": false 00:07:55.167 }, 00:07:55.167 "method": "bdev_nvme_attach_controller" 00:07:55.167 }' 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:55.167 { 00:07:55.167 "params": { 00:07:55.167 "name": "Nvme$subsystem", 00:07:55.167 "trtype": "$TEST_TRANSPORT", 00:07:55.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.167 "adrfam": "ipv4", 00:07:55.167 "trsvcid": "$NVMF_PORT", 00:07:55.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.167 "hdgst": ${hdgst:-false}, 00:07:55.167 "ddgst": ${ddgst:-false} 00:07:55.167 }, 00:07:55.167 "method": "bdev_nvme_attach_controller" 00:07:55.167 } 00:07:55.167 EOF 00:07:55.167 )") 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:55.167 "params": { 00:07:55.167 "name": "Nvme1", 00:07:55.167 "trtype": "tcp", 00:07:55.167 "traddr": "10.0.0.2", 00:07:55.167 "adrfam": "ipv4", 00:07:55.167 "trsvcid": "4420", 00:07:55.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.167 "hdgst": false, 00:07:55.167 "ddgst": false 00:07:55.167 }, 00:07:55.167 "method": "bdev_nvme_attach_controller" 00:07:55.167 }' 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:55.167 "params": { 00:07:55.167 "name": "Nvme1", 00:07:55.167 "trtype": "tcp", 00:07:55.167 "traddr": "10.0.0.2", 00:07:55.167 "adrfam": "ipv4", 00:07:55.167 "trsvcid": "4420", 00:07:55.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.167 "hdgst": false, 00:07:55.167 "ddgst": false 00:07:55.167 }, 00:07:55.167 "method": "bdev_nvme_attach_controller" 00:07:55.167 }' 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
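Each bdevperf instance above receives its configuration on /dev/fd/63, i.e. through bash process substitution: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry and jq assembles the document whose resolved form is printed in the log. The sketch below reproduces that hand-off for the single write job; the parameter values are copied from the resolved JSON above, but the outer "subsystems"/"bdev" wrapper is an assumption here (it is the usual SPDK JSON-config shape and is not printed verbatim in the log), and gen_attach_json is a hypothetical helper, not the harness function.

```bash
# Hypothetical helper: emit a JSON config whose only entry attaches the
# remote NVMe-oF controller exported by the target in this test.
gen_attach_json() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# The write job from the log, fed through process substitution (--json /dev/fd/63).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json <(gen_attach_json) -q 128 -o 4096 -w write -t 1 -s 256
```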
00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:55.167 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:55.167 "params": { 00:07:55.167 "name": "Nvme1", 00:07:55.167 "trtype": "tcp", 00:07:55.167 "traddr": "10.0.0.2", 00:07:55.167 "adrfam": "ipv4", 00:07:55.167 "trsvcid": "4420", 00:07:55.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.167 "hdgst": false, 00:07:55.167 "ddgst": false 00:07:55.167 }, 00:07:55.167 "method": "bdev_nvme_attach_controller" 00:07:55.167 }' 00:07:55.425 16:11:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66212 00:07:55.425 [2024-07-12 16:11:38.904712] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:55.425 [2024-07-12 16:11:38.904794] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:55.425 [2024-07-12 16:11:38.920096] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:55.425 [2024-07-12 16:11:38.920498] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:55.425 [2024-07-12 16:11:38.930738] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:55.425 [2024-07-12 16:11:38.930810] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:55.425 [2024-07-12 16:11:38.941839] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:55.425 [2024-07-12 16:11:38.941964] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:55.425 [2024-07-12 16:11:39.084198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.425 [2024-07-12 16:11:39.128637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.425 [2024-07-12 16:11:39.139442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:55.683 [2024-07-12 16:11:39.170705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.683 [2024-07-12 16:11:39.170908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.683 [2024-07-12 16:11:39.184406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:07:55.683 [2024-07-12 16:11:39.214721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.683 [2024-07-12 16:11:39.216171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.683 [2024-07-12 16:11:39.225996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:07:55.683 [2024-07-12 16:11:39.258369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.683 Running I/O for 1 seconds... 
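The four EAL initializations above belong to four concurrent bdevperf processes, one per workload, each pinned to its own core mask; the harness records their PIDs (WRITE_PID, READ_PID, FLUSH_PID, UNMAP_PID) and waits on them once all are running. A condensed sketch of that fan-out/fan-in follows; it assumes the attach JSON from the previous sketch has been saved to a file, whereas the harness itself passes it via --json /dev/fd/63.

```bash
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONFIG=/tmp/attach_nvme1.json   # assumed: the JSON from the previous sketch saved to a file

# Launch one bdevperf per workload on its own core mask, as in the log.
"$BDEVPERF" -m 0x10 -i 1 --json "$CONFIG" -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json "$CONFIG" -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json "$CONFIG" -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json "$CONFIG" -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

# Fan-in: each job runs for 1 second and prints its own latency table on exit.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
```

The per-job latency tables that follow are the output of these four processes; the much higher IOPS figure for the flush job is consistent with flush being close to a no-op for a RAM-backed malloc bdev.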
00:07:55.683 [2024-07-12 16:11:39.268838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:07:55.683 [2024-07-12 16:11:39.299906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.683 Running I/O for 1 seconds... 00:07:55.683 Running I/O for 1 seconds... 00:07:55.940 Running I/O for 1 seconds... 00:07:56.872 00:07:56.872 Latency(us) 00:07:56.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.872 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:56.872 Nvme1n1 : 1.01 9661.96 37.74 0.00 0.00 13187.49 7060.01 20733.21 00:07:56.872 =================================================================================================================== 00:07:56.872 Total : 9661.96 37.74 0.00 0.00 13187.49 7060.01 20733.21 00:07:56.872 00:07:56.872 Latency(us) 00:07:56.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.872 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:56.872 Nvme1n1 : 1.01 7707.29 30.11 0.00 0.00 16509.41 10545.34 25856.93 00:07:56.872 =================================================================================================================== 00:07:56.872 Total : 7707.29 30.11 0.00 0.00 16509.41 10545.34 25856.93 00:07:56.872 00:07:56.872 Latency(us) 00:07:56.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.872 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:56.872 Nvme1n1 : 1.01 8744.60 34.16 0.00 0.00 14579.55 6881.28 28120.90 00:07:56.872 =================================================================================================================== 00:07:56.872 Total : 8744.60 34.16 0.00 0.00 14579.55 6881.28 28120.90 00:07:56.872 00:07:56.872 Latency(us) 00:07:56.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.872 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:56.872 Nvme1n1 : 1.00 165656.60 647.10 0.00 0.00 769.93 351.88 1042.62 00:07:56.872 =================================================================================================================== 00:07:56.872 Total : 165656.60 647.10 0.00 0.00 769.93 351.88 1042.62 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66214 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66216 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66219 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:56.872 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@120 -- # set +e 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.130 rmmod nvme_tcp 00:07:57.130 rmmod nvme_fabrics 00:07:57.130 rmmod nvme_keyring 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66171 ']' 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66171 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66171 ']' 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66171 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66171 00:07:57.130 killing process with pid 66171 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66171' 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66171 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66171 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.130 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.131 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.131 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.131 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.131 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.131 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.483 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:57.483 00:07:57.483 real 0m3.702s 00:07:57.483 user 0m16.217s 00:07:57.483 sys 0m1.991s 00:07:57.483 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.483 16:11:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.483 ************************************ 00:07:57.483 END TEST nvmf_bdev_io_wait 00:07:57.483 ************************************ 00:07:57.483 16:11:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:57.483 16:11:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:57.483 16:11:40 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.483 16:11:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.483 16:11:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.483 ************************************ 00:07:57.483 START TEST nvmf_queue_depth 00:07:57.483 ************************************ 00:07:57.483 16:11:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:57.483 * Looking for test storage... 00:07:57.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:57.483 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:57.483 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:57.483 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.483 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.483 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.483 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.483 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:57.484 Cannot find device "nvmf_tgt_br" 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.484 Cannot find device "nvmf_tgt_br2" 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:57.484 16:11:41 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:57.484 Cannot find device "nvmf_tgt_br" 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:57.484 Cannot find device "nvmf_tgt_br2" 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:57.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:57.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:57.484 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:57.758 
16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:57.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:07:57.758 00:07:57.758 --- 10.0.0.2 ping statistics --- 00:07:57.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.758 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:57.758 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:57.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:57.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:57.759 00:07:57.759 --- 10.0.0.3 ping statistics --- 00:07:57.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.759 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:57.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:57.759 00:07:57.759 --- 10.0.0.1 ping statistics --- 00:07:57.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.759 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66418 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66418 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66418 ']' 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:07:57.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:57.759 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.759 [2024-07-12 16:11:41.428710] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:57.759 [2024-07-12 16:11:41.428802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.027 [2024-07-12 16:11:41.566179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.027 [2024-07-12 16:11:41.622334] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.027 [2024-07-12 16:11:41.622395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.027 [2024-07-12 16:11:41.622406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.027 [2024-07-12 16:11:41.622413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.027 [2024-07-12 16:11:41.622420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
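The app_setup_trace notices above describe two ways to inspect the tracepoints enabled by -e 0xFFFF. Both commands below are quoted from those notices; the destination path for the offline copy is an arbitrary choice, and spdk_trace may need to be invoked via its full path under build/bin if it is not on PATH.

```bash
# Quoted from the notices above: take a live snapshot of the nvmf target's
# trace events, or keep the shared-memory trace file for offline analysis.
spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # /tmp destination is an arbitrary choice
```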
00:07:58.027 [2024-07-12 16:11:41.622444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.027 [2024-07-12 16:11:41.651668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.027 [2024-07-12 16:11:41.744645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.027 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.285 Malloc0 00:07:58.285 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.285 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:58.285 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.285 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.286 [2024-07-12 16:11:41.795609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66442 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66442 /var/tmp/bdevperf.sock 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66442 ']' 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.286 16:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.286 [2024-07-12 16:11:41.845787] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:07:58.286 [2024-07-12 16:11:41.845886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66442 ] 00:07:58.286 [2024-07-12 16:11:41.982793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.543 [2024-07-12 16:11:42.053013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.543 [2024-07-12 16:11:42.086555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.543 16:11:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.543 16:11:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:07:58.543 16:11:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:58.543 16:11:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.543 16:11:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.543 NVMe0n1 00:07:58.543 16:11:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.543 16:11:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.800 Running I/O for 10 seconds... 
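The queue-depth run above is wired up in two halves: the target side is provisioned over /var/tmp/spdk.sock (transport, Malloc0, subsystem, namespace, TCP listener), while bdevperf is started with -z against its own RPC socket, given an NVMe bdev via bdev_nvme_attach_controller, and then driven by bdevperf.py perform_tests. The sketch below condenses those steps, assuming both processes are already running and substituting direct rpc.py calls for the harness's rpc_cmd wrapper.

```bash
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# Target side (default RPC socket /var/tmp/spdk.sock): transport, backing
# bdev, subsystem, namespace, TCP listener -- the same calls rpc_cmd issues above.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf was started with -z (wait for RPC) on its own
# socket, so the NVMe bdev is attached there and the run is kicked off explicitly.
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
```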
00:08:08.771 00:08:08.771 Latency(us) 00:08:08.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.771 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:08.771 Verification LBA range: start 0x0 length 0x4000 00:08:08.771 NVMe0n1 : 10.09 8939.27 34.92 0.00 0.00 114109.01 22878.02 90558.84 00:08:08.771 =================================================================================================================== 00:08:08.771 Total : 8939.27 34.92 0.00 0.00 114109.01 22878.02 90558.84 00:08:08.771 0 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66442 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66442 ']' 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66442 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66442 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.771 killing process with pid 66442 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66442' 00:08:08.771 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.771 00:08:08.771 Latency(us) 00:08:08.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.771 =================================================================================================================== 00:08:08.771 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66442 00:08:08.771 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66442 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.030 rmmod nvme_tcp 00:08:09.030 rmmod nvme_fabrics 00:08:09.030 rmmod nvme_keyring 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66418 ']' 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66418 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66418 ']' 00:08:09.030 
16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66418 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66418 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:09.030 killing process with pid 66418 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66418' 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66418 00:08:09.030 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66418 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:09.305 00:08:09.305 real 0m11.978s 00:08:09.305 user 0m21.017s 00:08:09.305 sys 0m1.934s 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.305 16:11:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.305 ************************************ 00:08:09.305 END TEST nvmf_queue_depth 00:08:09.305 ************************************ 00:08:09.305 16:11:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:09.305 16:11:52 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:09.305 16:11:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.305 16:11:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.305 16:11:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.305 ************************************ 00:08:09.305 START TEST nvmf_target_multipath 00:08:09.305 ************************************ 00:08:09.305 16:11:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:09.571 * Looking for test storage... 
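The END TEST / START TEST banners and the real/user/sys line above come from the run_test wrapper, which frames each test script with banners and times it. The sketch below is a hypothetical reduction of that pattern for illustration, not the harness source, which also handles xtrace toggling and shared-memory cleanup.

```bash
# Hypothetical reduction of the run_test pattern seen in the log: frame the
# test script with START/END banners, time it, and propagate its exit status.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return "$rc"
}

run_test_sketch nvmf_target_multipath \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp
```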
00:08:09.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.571 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.572 16:11:53 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:09.572 Cannot find device "nvmf_tgt_br" 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.572 Cannot find device "nvmf_tgt_br2" 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:09.572 Cannot find device "nvmf_tgt_br" 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:09.572 
16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:09.572 Cannot find device "nvmf_tgt_br2" 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:09.572 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:09.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:08:09.831 00:08:09.831 --- 10.0.0.2 ping statistics --- 00:08:09.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.831 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:09.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:08:09.831 00:08:09.831 --- 10.0.0.3 ping statistics --- 00:08:09.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.831 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:09.831 00:08:09.831 --- 10.0.0.1 ping statistics --- 00:08:09.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.831 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:09.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
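The block above is the harness's nvmf_veth_init. The early "Cannot find device" and "Cannot open network namespace" messages are just the idempotent cleanup of any previous run failing harmlessly (each failing command is followed by "# true" in the trace); after that a fresh test network is built: namespace nvmf_tgt_ns_spdk, three veth pairs, addresses 10.0.0.1 to 10.0.0.3/24, an nvmf_br bridge, an iptables ACCEPT rule for TCP port 4420, and ping checks in both directions. A minimal stand-alone sketch of that topology, reusing the names from the log rather than the actual common.sh code, would be:

  #!/usr/bin/env bash
  # Sketch only: mirrors the nvmf_veth_init commands traced above.
  set -e
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the two target-side interfaces into the namespace the target will run in
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = the two target ports
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peer ends together so initiator and target ports can talk
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic to the initiator interface and within the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity checks, as in the log
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the pings answering, the script loads nvme-tcp and moves on to nvmfappstart.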
00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66752 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66752 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 66752 ']' 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.831 16:11:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:09.831 [2024-07-12 16:11:53.446869] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:08:09.831 [2024-07-12 16:11:53.447006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.090 [2024-07-12 16:11:53.581412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.090 [2024-07-12 16:11:53.637244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.090 [2024-07-12 16:11:53.637303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.090 [2024-07-12 16:11:53.637312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.090 [2024-07-12 16:11:53.637319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.090 [2024-07-12 16:11:53.637325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
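nvmfappstart, traced above, launches nvmf_tgt inside the namespace with an explicit shared-memory id, tracepoint mask and core mask (-i 0 -e 0xFFFF -m 0xF), records its pid (66752 here), and waits in waitforlisten until the application is answering on its JSON-RPC socket at /var/tmp/spdk.sock. A simplified way to reproduce that start-and-wait behaviour by hand, shown only as a sketch (the polling loop and the use of spdk_get_version as a liveness probe are this sketch's choices, not what waitforlisten does internally), is:

  # Sketch: start nvmf_tgt in the test namespace and poll its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  tgt_pid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      # spdk_get_version is a lightweight RPC; it succeeds once the app is listening
      if "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
          echo "nvmf_tgt is up (pid $tgt_pid)"
          break
      fi
      # bail out if the target died instead of coming up
      kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done

Once the socket responds, everything that follows in the test is configured through rpc.py against that same socket.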
00:08:10.090 [2024-07-12 16:11:53.637446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.090 [2024-07-12 16:11:53.638336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.090 [2024-07-12 16:11:53.638519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.090 [2024-07-12 16:11:53.638616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.090 [2024-07-12 16:11:53.667713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.025 16:11:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.025 16:11:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:08:11.025 16:11:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.025 16:11:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.025 16:11:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:11.025 16:11:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.025 16:11:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:11.025 [2024-07-12 16:11:54.667497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.025 16:11:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:11.284 Malloc0 00:08:11.284 16:11:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:11.544 16:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:11.802 16:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.061 [2024-07-12 16:11:55.682974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.061 16:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:12.319 [2024-07-12 16:11:55.899153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:12.319 16:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid=0f8ee936-81ee-4845-9dc2-94c8381dda10 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:12.319 16:11:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid=0f8ee936-81ee-4845-9dc2-94c8381dda10 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:12.578 16:11:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:12.578 16:11:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:08:12.578 16:11:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:12.578 16:11:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:12.578 16:11:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66839 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:14.478 16:11:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:14.736 [global] 00:08:14.736 thread=1 00:08:14.736 invalidate=1 00:08:14.736 rw=randrw 00:08:14.736 time_based=1 00:08:14.736 runtime=6 00:08:14.736 ioengine=libaio 00:08:14.736 direct=1 00:08:14.736 bs=4096 00:08:14.736 iodepth=128 00:08:14.736 norandommap=0 00:08:14.736 numjobs=1 00:08:14.736 00:08:14.736 verify_dump=1 00:08:14.736 verify_backlog=512 00:08:14.736 verify_state_save=0 00:08:14.736 do_verify=1 00:08:14.736 verify=crc32c-intel 00:08:14.736 [job0] 00:08:14.736 filename=/dev/nvme0n1 00:08:14.736 Could not set queue depth (nvme0n1) 00:08:14.736 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:14.736 fio-3.35 00:08:14.736 Starting 1 thread 00:08:15.670 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:15.928 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:16.186 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:16.444 16:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:16.702 16:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66839 00:08:20.886 00:08:20.886 job0: (groupid=0, jobs=1): err= 0: pid=66865: Fri Jul 12 16:12:04 2024 00:08:20.886 read: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(238MiB/6006msec) 00:08:20.886 slat (usec): min=3, max=6060, avg=58.36, stdev=221.57 00:08:20.886 clat (usec): min=1615, max=15643, avg=8577.04, stdev=1457.96 00:08:20.886 lat (usec): min=1628, max=15673, avg=8635.40, stdev=1461.61 00:08:20.886 clat percentiles (usec): 00:08:20.886 | 1.00th=[ 4359], 5.00th=[ 6652], 10.00th=[ 7373], 20.00th=[ 7832], 00:08:20.886 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:08:20.886 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[11994], 00:08:20.886 | 99.00th=[13173], 99.50th=[13698], 99.90th=[14353], 99.95th=[14484], 00:08:20.886 | 99.99th=[15008] 00:08:20.886 bw ( KiB/s): min= 5408, max=26136, per=51.13%, avg=20768.36, stdev=7163.99, samples=11 00:08:20.886 iops : min= 1352, max= 6534, avg=5192.27, stdev=1791.12, samples=11 00:08:20.886 write: IOPS=6058, BW=23.7MiB/s (24.8MB/s)(125MiB/5288msec); 0 zone resets 00:08:20.886 slat (usec): min=4, max=2806, avg=65.90, stdev=160.04 00:08:20.886 clat (usec): min=2444, max=14552, avg=7453.20, stdev=1293.81 00:08:20.886 lat (usec): min=2469, max=14576, avg=7519.10, stdev=1297.82 00:08:20.886 clat percentiles (usec): 00:08:20.886 | 1.00th=[ 3425], 5.00th=[ 4424], 10.00th=[ 6063], 20.00th=[ 6980], 00:08:20.886 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7767], 00:08:20.886 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8586], 95.00th=[ 8848], 00:08:20.886 | 99.00th=[11207], 99.50th=[11863], 99.90th=[13042], 99.95th=[13304], 00:08:20.886 | 99.99th=[14222] 00:08:20.886 bw ( KiB/s): min= 5664, max=25744, per=86.02%, avg=20845.55, stdev=7000.94, samples=11 00:08:20.886 iops : min= 1416, max= 6436, avg=5211.36, stdev=1750.22, samples=11 00:08:20.886 lat (msec) : 2=0.01%, 4=1.40%, 10=91.89%, 20=6.70% 00:08:20.886 cpu : usr=5.66%, sys=22.11%, ctx=5509, majf=0, minf=114 00:08:20.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:20.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:20.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:20.886 issued rwts: total=60988,32037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:20.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:20.886 00:08:20.886 Run status group 0 (all jobs): 00:08:20.886 READ: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=238MiB (250MB), run=6006-6006msec 00:08:20.886 WRITE: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=125MiB (131MB), run=5288-5288msec 00:08:20.886 00:08:20.886 Disk stats (read/write): 00:08:20.886 nvme0n1: ios=60346/31176, merge=0/0, ticks=497223/218257, in_queue=715480, util=98.61% 00:08:20.886 16:12:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:21.144 16:12:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:08:21.402 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:21.402 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:21.402 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:21.402 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:21.402 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:21.402 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:21.402 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66939 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:21.403 16:12:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:21.403 [global] 00:08:21.403 thread=1 00:08:21.403 invalidate=1 00:08:21.403 rw=randrw 00:08:21.403 time_based=1 00:08:21.403 runtime=6 00:08:21.403 ioengine=libaio 00:08:21.403 direct=1 00:08:21.403 bs=4096 00:08:21.403 iodepth=128 00:08:21.403 norandommap=0 00:08:21.403 numjobs=1 00:08:21.403 00:08:21.403 verify_dump=1 00:08:21.403 verify_backlog=512 00:08:21.403 verify_state_save=0 00:08:21.403 do_verify=1 00:08:21.403 verify=crc32c-intel 00:08:21.403 [job0] 00:08:21.403 filename=/dev/nvme0n1 00:08:21.403 Could not set queue depth (nvme0n1) 00:08:21.660 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:21.660 fio-3.35 00:08:21.660 Starting 1 thread 00:08:22.594 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:22.855 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:23.112 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:23.112 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:23.112 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:23.112 16:12:06 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:23.112 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:23.113 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:23.113 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:23.113 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:23.113 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:23.113 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:23.113 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:23.113 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:23.113 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:23.370 16:12:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:23.628 16:12:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66939 00:08:27.812 00:08:27.812 job0: (groupid=0, jobs=1): err= 0: pid=66965: Fri Jul 12 16:12:11 2024 00:08:27.812 read: IOPS=11.6k, BW=45.1MiB/s (47.3MB/s)(271MiB/6006msec) 00:08:27.812 slat (usec): min=4, max=5660, avg=43.46, stdev=185.94 00:08:27.812 clat (usec): min=291, max=17548, avg=7625.82, stdev=2152.57 00:08:27.812 lat (usec): min=301, max=17558, avg=7669.28, stdev=2166.45 00:08:27.812 clat percentiles (usec): 00:08:27.812 | 1.00th=[ 2442], 5.00th=[ 3458], 10.00th=[ 4359], 20.00th=[ 5997], 00:08:27.812 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8225], 00:08:27.812 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11338], 00:08:27.812 | 99.00th=[13173], 99.50th=[13960], 99.90th=[15270], 99.95th=[15401], 00:08:27.812 | 99.99th=[16450] 00:08:27.812 bw ( KiB/s): min=14120, max=35600, per=51.87%, avg=23978.91, stdev=6873.16, samples=11 00:08:27.812 iops : min= 3530, max= 8900, avg=5994.73, stdev=1718.29, samples=11 00:08:27.812 write: IOPS=6621, BW=25.9MiB/s (27.1MB/s)(141MiB/5443msec); 0 zone resets 00:08:27.812 slat (usec): min=16, max=2967, avg=54.87, stdev=132.30 00:08:27.812 clat (usec): min=798, max=15329, avg=6469.73, stdev=1876.78 00:08:27.812 lat (usec): min=830, max=15361, avg=6524.60, stdev=1890.73 00:08:27.812 clat percentiles (usec): 00:08:27.812 | 1.00th=[ 2311], 5.00th=[ 3130], 10.00th=[ 3621], 20.00th=[ 4424], 00:08:27.812 | 30.00th=[ 5473], 40.00th=[ 6652], 50.00th=[ 7046], 60.00th=[ 7373], 00:08:27.812 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8291], 95.00th=[ 8717], 00:08:27.812 | 99.00th=[10683], 99.50th=[11469], 99.90th=[12911], 99.95th=[13435], 00:08:27.812 | 99.99th=[14091] 00:08:27.812 bw ( KiB/s): min=14632, max=36376, per=90.59%, avg=23993.45, stdev=6684.73, samples=11 00:08:27.812 iops : min= 3658, max= 9094, avg=5998.36, stdev=1671.18, samples=11 00:08:27.812 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.04% 00:08:27.812 lat (msec) : 2=0.45%, 4=9.43%, 10=84.50%, 20=5.53% 00:08:27.812 cpu : usr=6.39%, sys=24.35%, ctx=6184, majf=0, minf=96 00:08:27.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:27.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:27.812 issued rwts: total=69410,36040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:27.812 00:08:27.812 Run status group 0 (all jobs): 00:08:27.812 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=271MiB (284MB), run=6006-6006msec 00:08:27.812 WRITE: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=141MiB (148MB), run=5443-5443msec 00:08:27.812 00:08:27.812 Disk stats (read/write): 00:08:27.812 nvme0n1: ios=68449/35430, merge=0/0, ticks=497551/212475, in_queue=710026, util=98.70% 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:27.812 16:12:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.070 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.070 rmmod nvme_tcp 00:08:28.328 rmmod nvme_fabrics 00:08:28.328 rmmod nvme_keyring 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 66752 ']' 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66752 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 66752 ']' 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 66752 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66752 00:08:28.328 killing process with pid 66752 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66752' 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 66752 00:08:28.328 16:12:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 66752 00:08:28.328 
16:12:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.328 16:12:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.328 16:12:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.328 16:12:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.328 16:12:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.328 16:12:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.328 16:12:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.328 16:12:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.587 16:12:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:28.587 00:08:28.587 real 0m19.121s 00:08:28.587 user 1m11.682s 00:08:28.587 sys 0m10.061s 00:08:28.587 16:12:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.587 16:12:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:28.587 ************************************ 00:08:28.587 END TEST nvmf_target_multipath 00:08:28.587 ************************************ 00:08:28.587 16:12:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:28.587 16:12:12 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:28.587 16:12:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.587 16:12:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.587 16:12:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.587 ************************************ 00:08:28.587 START TEST nvmf_zcopy 00:08:28.587 ************************************ 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:28.587 * Looking for test storage... 
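That closes out nvmf_target_multipath; before the zcopy test repeats the same network and target bring-up, it is worth condensing what the run above exercised: two listeners (10.0.0.2 and 10.0.0.3) on one subsystem, a host connected to both, ANA states flipped through rpc.py while fio keeps I/O running, and check_ana_state polling the kernel's per-path sysfs attribute (up to the 20-iteration timeout seen in the trace) until it reports the expected state. A sketch of one such transition, built from the commands traced above (wait_for_ana is an illustrative stand-in for the harness's check_ana_state, not a helper that exists in the repo):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # make path 1 (10.0.0.2) inaccessible and path 2 (10.0.0.3) non-optimized ...
  "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  # ... then wait until the kernel's view of each path matches
  wait_for_ana() {                  # usage: wait_for_ana <path block device> <expected state>
      local f=/sys/block/$1/ana_state
      for _ in $(seq 1 20); do
          [[ -e "$f" && "$(cat "$f")" == "$2" ]] && return 0
          sleep 1
      done
      return 1
  }
  wait_for_ana nvme0c0n1 inaccessible
  wait_for_ana nvme0c1n1 non-optimized

Both fio runs above complete their six-second time_based randrw workloads cleanly while the paths move between optimized, non-optimized and inaccessible, which is the behaviour the test is after.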
00:08:28.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:28.587 16:12:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:28.588 Cannot find device "nvmf_tgt_br" 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.588 Cannot find device "nvmf_tgt_br2" 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:28.588 Cannot find device "nvmf_tgt_br" 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:08:28.588 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:28.846 Cannot find device "nvmf_tgt_br2" 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:28.846 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:29.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:08:29.104 00:08:29.104 --- 10.0.0.2 ping statistics --- 00:08:29.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.104 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:29.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:29.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:08:29.104 00:08:29.104 --- 10.0.0.3 ping statistics --- 00:08:29.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.104 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:29.104 00:08:29.104 --- 10.0.0.1 ping statistics --- 00:08:29.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.104 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67213 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67213 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67213 ']' 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.104 16:12:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.104 [2024-07-12 16:12:12.709622] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:08:29.104 [2024-07-12 16:12:12.709706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.362 [2024-07-12 16:12:12.851471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.362 [2024-07-12 16:12:12.921674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.362 [2024-07-12 16:12:12.921745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:29.362 [2024-07-12 16:12:12.921770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.362 [2024-07-12 16:12:12.921781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.362 [2024-07-12 16:12:12.921789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.362 [2024-07-12 16:12:12.921816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.362 [2024-07-12 16:12:12.954495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.320 [2024-07-12 16:12:13.808607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.320 [2024-07-12 16:12:13.828625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:08:30.320 malloc0 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:30.320 { 00:08:30.320 "params": { 00:08:30.320 "name": "Nvme$subsystem", 00:08:30.320 "trtype": "$TEST_TRANSPORT", 00:08:30.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.320 "adrfam": "ipv4", 00:08:30.320 "trsvcid": "$NVMF_PORT", 00:08:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.320 "hdgst": ${hdgst:-false}, 00:08:30.320 "ddgst": ${ddgst:-false} 00:08:30.320 }, 00:08:30.320 "method": "bdev_nvme_attach_controller" 00:08:30.320 } 00:08:30.320 EOF 00:08:30.320 )") 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:30.320 16:12:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:30.320 "params": { 00:08:30.320 "name": "Nvme1", 00:08:30.320 "trtype": "tcp", 00:08:30.320 "traddr": "10.0.0.2", 00:08:30.320 "adrfam": "ipv4", 00:08:30.320 "trsvcid": "4420", 00:08:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.320 "hdgst": false, 00:08:30.320 "ddgst": false 00:08:30.320 }, 00:08:30.320 "method": "bdev_nvme_attach_controller" 00:08:30.320 }' 00:08:30.320 [2024-07-12 16:12:13.912382] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:08:30.320 [2024-07-12 16:12:13.912517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67251 ] 00:08:30.577 [2024-07-12 16:12:14.053013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.577 [2024-07-12 16:12:14.124901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.577 [2024-07-12 16:12:14.166517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.578 Running I/O for 10 seconds... 
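Editor's note: the trace above brings up the zcopy target entirely through the test helper rpc_cmd, which is a thin wrapper around scripts/rpc.py. As a rough sketch only, replaying the same sequence by hand against an already-running nvmf_tgt could look like the lines below; the rpc() shorthand and the assumption that the target listens on the default /var/tmp/spdk.sock RPC socket are mine, while the SPDK path, subcommands and flags are copied verbatim from the trace (zcopy.sh@22 through @30).

    #!/usr/bin/env bash
    # Sketch: manually replay the target-side RPC sequence traced above.
    # Assumes an SPDK checkout at $SPDK_DIR and an nvmf_tgt already serving
    # RPCs on the default /var/tmp/spdk.sock socket.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                     # TCP transport with zero-copy enabled
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0                            # 32 MB malloc bdev, 4096-byte blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # expose it as NSID 1

With the target configured, the bdevperf invocation in the trace (--json /dev/fd/62 -t 10 -q 128 -w verify -o 8192) acts as the initiator side, reading the bdev_nvme_attach_controller config that gen_nvmf_target_json emits and connecting to 10.0.0.2:4420 for the 10-second verify run whose results follow.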
00:08:42.771 00:08:42.771 Latency(us) 00:08:42.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.772 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:42.772 Verification LBA range: start 0x0 length 0x1000 00:08:42.772 Nvme1n1 : 10.02 5994.70 46.83 0.00 0.00 21280.93 2249.08 34793.66 00:08:42.772 =================================================================================================================== 00:08:42.772 Total : 5994.70 46.83 0.00 0.00 21280.93 2249.08 34793.66 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67362 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:42.772 { 00:08:42.772 "params": { 00:08:42.772 "name": "Nvme$subsystem", 00:08:42.772 "trtype": "$TEST_TRANSPORT", 00:08:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.772 "adrfam": "ipv4", 00:08:42.772 "trsvcid": "$NVMF_PORT", 00:08:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.772 "hdgst": ${hdgst:-false}, 00:08:42.772 "ddgst": ${ddgst:-false} 00:08:42.772 }, 00:08:42.772 "method": "bdev_nvme_attach_controller" 00:08:42.772 } 00:08:42.772 EOF 00:08:42.772 )") 00:08:42.772 [2024-07-12 16:12:24.465381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.465425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
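Editor's note: from this point the trace interleaves the second bdevperf run (5 seconds of randrw at queue depth 128, 8192-byte I/O, 50/50 read-write mix) with a long run of paired target errors: spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 because it is already attached, followed by nvmf_rpc_ns_paused reporting that the namespace could not be added. The pattern is consistent with the test repeatedly re-issuing nvmf_subsystem_add_ns for the in-use NSID while I/O is in flight, so that each attempt pauses and resumes the subsystem under load. A hypothetical loop that produces exactly this error pattern is sketched below; it is an illustration, not the literal zcopy.sh code, and $SPDK_DIR plus the bdevperf.json path are assumptions, while the bdevperf flags and the NQN are taken from the trace (which feeds the config through /dev/fd/63 instead of a file).

    # Sketch only: keep re-adding an NSID that is already in use while bdevperf
    # I/O is running, producing the repeated "Requested NSID 1 already in use" /
    # "Unable to add namespace" pairs seen in the trace.
    "$SPDK_DIR/build/examples/bdevperf" --json bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    while kill -0 "$perfpid" 2> /dev/null; do
        # Expected to fail: NSID 1 is already attached to cnode1. Each attempt
        # still pauses and resumes the subsystem while I/O is outstanding.
        "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"

The per-iteration error pairs below are therefore expected output for this phase of the test rather than a failure.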
00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:42.772 16:12:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:42.772 "params": { 00:08:42.772 "name": "Nvme1", 00:08:42.772 "trtype": "tcp", 00:08:42.772 "traddr": "10.0.0.2", 00:08:42.772 "adrfam": "ipv4", 00:08:42.772 "trsvcid": "4420", 00:08:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.772 "hdgst": false, 00:08:42.772 "ddgst": false 00:08:42.772 }, 00:08:42.772 "method": "bdev_nvme_attach_controller" 00:08:42.772 }' 00:08:42.772 [2024-07-12 16:12:24.477362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.477392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.489373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.489420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.501362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.501390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.513365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.513393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.520328] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:08:42.772 [2024-07-12 16:12:24.520420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67362 ] 00:08:42.772 [2024-07-12 16:12:24.525364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.525389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.537379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.537404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.549365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.549389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.561371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.561397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.573375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.573400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.585381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.585425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.597386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.597414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.609387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.609414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.621394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.621421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.633395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.633421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.645403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.645430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.657406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.657433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.663043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.772 [2024-07-12 16:12:24.669449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.669488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.677425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.677457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.685420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.685469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.693423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.693451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.705440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.705481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.713429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.713456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.721429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.721454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.724007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.772 [2024-07-12 16:12:24.729419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.729442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.737446] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.737477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.749479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.749517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.757458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.757496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.762674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.772 [2024-07-12 16:12:24.765450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.765477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.773461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.773498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.781441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.781467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.789445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.789470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.797466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.797497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.805464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.805493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.813467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.813495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.821509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.821538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.829504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.829533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.837501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.837531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.845499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.772 [2024-07-12 16:12:24.845527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.772 [2024-07-12 16:12:24.853682] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.853720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.861596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.861632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 Running I/O for 5 seconds... 00:08:42.773 [2024-07-12 16:12:24.869611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.869637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.883774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.883809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.894372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.894406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.905598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.905631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.916985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.917017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.934478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.934526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.951733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.951769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.968603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.968639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.978622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.978672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:24.990483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:24.990517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.001110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.001145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.011801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.011834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.023150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 
[2024-07-12 16:12:25.023184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.035278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.035326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.051391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.051426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.068827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.068873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.078894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.078926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.090361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.090424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.101130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.101164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.111828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.111873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.122128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.122161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.132950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.132983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.150173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.150241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.167402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.167463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.182488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.182523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.199404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.199467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.214359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.214393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.224365] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.224414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.237909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.237940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.248585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.248633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.263491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.263523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.280303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.280337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.290333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.290367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.305031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.305065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.315893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.315926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.330348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.330384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.340476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.340509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.351883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.351915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.367923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.367959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.384194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.384230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.394153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.394186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.405618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.405650] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.416790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.416823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.428836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.428884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.444728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.444773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.461889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.461924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.472205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.472237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.486860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.486909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.496821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.496853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.511757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.511792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.521201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.521234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.534241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.534276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.545008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.545043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.556257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.556290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.570661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.570696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.773 [2024-07-12 16:12:25.588402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.773 [2024-07-12 16:12:25.588438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.598778] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.598811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.613127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.613163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.623885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.623917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.638709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.638743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.654707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.654744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.664471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.664505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.676240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.676274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.688344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.688377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.703709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.703747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.721227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.721266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.731503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.731536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.742818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.742852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.753501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.753534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.764498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.764532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.779222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.779254] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.796853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.796916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.807521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.807553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.818564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.818596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.829267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.829300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.844103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.844139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.861537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.861574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.871539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.871573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.886041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.886076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.896510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.896541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.911019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.911052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.921051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.921083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.932818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.932852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.948747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.948796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.965968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.966001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:25.982091] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:25.982126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.000256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.000291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.014048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.014081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.030033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.030067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.048731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.048767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.063754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.063821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.074076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.074110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.089633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.089669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.105749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.105785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.115576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.115608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.130508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.130543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.139976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.140008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.155510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.155543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.165214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.165246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.180558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.180593] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.190359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.190392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.205941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.205975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.216008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.216041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.231077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.231112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.247422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.247469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.265379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.265417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.280683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.280732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.290459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.290492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.302580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.302614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.318333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.318382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.335378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.335411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.345695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.345731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.774 [2024-07-12 16:12:26.357288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.774 [2024-07-12 16:12:26.357322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.368189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.368223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.380561] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.380598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.396031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.396066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.414990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.415029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.429237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.429272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.444504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.444539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.453737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.453772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.469810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.469845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.482132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.482166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.775 [2024-07-12 16:12:26.491242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.775 [2024-07-12 16:12:26.491275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.503181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.503215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.514220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.514253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.529190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.529223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.539031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.539063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.554347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.554382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.564995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.565027] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.580306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.580342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.596592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.596628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.606084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.606117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.617989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.618024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.628820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.628854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.639671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.639705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.650411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.033 [2024-07-12 16:12:26.650444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.033 [2024-07-12 16:12:26.661214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.661247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.673116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.673149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.682396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.682429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.694424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.694458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.705783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.705816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.716943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.716976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.727371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.727406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.737983] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.738017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.748795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.748832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.034 [2024-07-12 16:12:26.759717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.034 [2024-07-12 16:12:26.759752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.292 [2024-07-12 16:12:26.771944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.292 [2024-07-12 16:12:26.771976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.292 [2024-07-12 16:12:26.787362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.292 [2024-07-12 16:12:26.787397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.292 [2024-07-12 16:12:26.805269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.292 [2024-07-12 16:12:26.805321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.292 [2024-07-12 16:12:26.815545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.292 [2024-07-12 16:12:26.815578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.292 [2024-07-12 16:12:26.830281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.292 [2024-07-12 16:12:26.830314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.292 [2024-07-12 16:12:26.848701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.292 [2024-07-12 16:12:26.848737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.292 [2024-07-12 16:12:26.863206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.292 [2024-07-12 16:12:26.863244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.292 [2024-07-12 16:12:26.872364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.872399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.888031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.888066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.903857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.903903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.913767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.913799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.925513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.925546] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.938104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.938136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.947880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.947913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.961105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.961139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.971777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.971812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:26.986653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:26.986688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.293 [2024-07-12 16:12:27.003884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.293 [2024-07-12 16:12:27.003917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.019685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.019719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.028975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.029024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.044943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.044977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.054899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.054947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.069571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.069606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.085735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.085771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.095484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.095519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.108692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.108726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.119704] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.119738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.136486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.136521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.153646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.153679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.170468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.170519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.186283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.186319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.195922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.195964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.212203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.212239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.228508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.228544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.238268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.238301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.249662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.249695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.261840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.261883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.552 [2024-07-12 16:12:27.271431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.552 [2024-07-12 16:12:27.271472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.283003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.283036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.293887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.293931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.312556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.312591] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.327140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.327174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.336611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.336643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.348164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.348198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.361247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.361281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.371542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.371575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.385995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.386030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.395743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.395777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.411758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.411794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.428409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.428444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.437865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.437942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.449963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.449997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.460937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.460979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.471754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.471786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.488726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.488760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.505892] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.505927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.515711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.515744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.811 [2024-07-12 16:12:27.527066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.811 [2024-07-12 16:12:27.527099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.537660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.537694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.548318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.548352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.559655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.559688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.570900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.570960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.582032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.582066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.592794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.592828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.606073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.606106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.623844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.623888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.638107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.638144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.647734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.647766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.662900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.662950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.673219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.673252] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.688252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.688287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.705197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.705235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.714885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.714917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.730220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.730298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.747265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.747304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.757056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.757090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.768749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.768783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.779584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.779620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.070 [2024-07-12 16:12:27.792349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.070 [2024-07-12 16:12:27.792384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.810853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.810900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.825445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.825479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.835506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.835538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.847196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.847230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.858135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.858170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.874995] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.875034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.884572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.884606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.896110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.896144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.908351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.908383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.917574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.917607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.934233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.934270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.944506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.944541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.959584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.959620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.969857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.969903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:27.984230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:27.984267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:28.000011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:28.000040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:28.009586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:28.009620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:28.025431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:28.025467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:28.035394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:28.035452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.329 [2024-07-12 16:12:28.050090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.329 [2024-07-12 16:12:28.050123] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.588 [2024-07-12 16:12:28.066287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.588 [2024-07-12 16:12:28.066326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.588 [2024-07-12 16:12:28.076538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.588 [2024-07-12 16:12:28.076572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.588 [2024-07-12 16:12:28.088168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.588 [2024-07-12 16:12:28.088204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.588 [2024-07-12 16:12:28.098809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.588 [2024-07-12 16:12:28.098858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.588 [2024-07-12 16:12:28.109553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.588 [2024-07-12 16:12:28.109586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.120469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.120520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.131428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.131469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.143877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.143906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.153779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.153814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.165185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.165218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.180172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.180208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.195990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.196023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.205649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.205681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.217897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.217941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.229014] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.229045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.244634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.244666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.255285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.255318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.270290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.270324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.287144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.287182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.296613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.296648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.589 [2024-07-12 16:12:28.312301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.589 [2024-07-12 16:12:28.312337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.323020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.323056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.337642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.337676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.356129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.356165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.366833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.366877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.378059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.378091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.389399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.389433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.405441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.405490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.421165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.421201] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.430581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.430615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.447182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.447216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.456881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.456913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.468147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.468180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.480195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.480228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.489832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.489877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.504431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.504466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.514615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.514648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.529792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.529842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.547619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.547655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.558303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.558337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.848 [2024-07-12 16:12:28.573033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.848 [2024-07-12 16:12:28.573067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.582856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.582899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.598053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.598088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.614414] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.614451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.623632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.623668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.638623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.638661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.656096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.656132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.666557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.666592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.677975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.678010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.688454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.688488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.702659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.702693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.720031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.720065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.736698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.736735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.753264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.753298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.762936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.762969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.774701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.774737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.785455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.785493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.796553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.796589] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.813131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.813170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.106 [2024-07-12 16:12:28.822905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.106 [2024-07-12 16:12:28.822940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.839261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.839298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.849108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.849142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.860548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.860581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.871930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.871973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.882630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.882663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.897797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.897832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.915485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.915520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.926011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.926044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.937150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.937183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.948010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.948043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.962920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.962958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.972540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.972576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:28.988262] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:28.988299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:29.005246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:29.005315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:29.014469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:29.014503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:29.030417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:29.030458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:29.041297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:29.041330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:29.056099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:29.056133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:29.072826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:29.072877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.364 [2024-07-12 16:12:29.082776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.364 [2024-07-12 16:12:29.082809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.621 [2024-07-12 16:12:29.098110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.621 [2024-07-12 16:12:29.098145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.621 [2024-07-12 16:12:29.108126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.621 [2024-07-12 16:12:29.108159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.621 [2024-07-12 16:12:29.123378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.123411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.141403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.141439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.156128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.156165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.165319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.165368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.176877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.176920] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.187538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.187572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.198554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.198588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.209503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.209535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.224532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.224567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.241843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.241895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.251739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.251772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.263195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.263229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.273986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.274019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.286673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.286709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.296329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.296363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.312191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.312231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.328586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.328624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.622 [2024-07-12 16:12:29.338032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.622 [2024-07-12 16:12:29.338065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.353469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.353502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.370810] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.370844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.381018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.381050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.395761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.395808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.413430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.413463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.430169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.430201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.440209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.440242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.454391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.454423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.464106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.464152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.475520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.475553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.486885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.486943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.497954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.497985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.509317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.509364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.524314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.524393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.534579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.534645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.548845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.548924] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.558708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.558769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.573324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.573416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.589709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.589782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.880 [2024-07-12 16:12:29.606596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.880 [2024-07-12 16:12:29.606665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.616472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.616534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.631062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.631132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.648198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.648285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.664580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.664651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.681272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.681331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.691381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.691436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.707319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.707393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.723645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.723719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.733531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.733592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.748475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.748546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.759584] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.759639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.768329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.768389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.783079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.783149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.792061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.792108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.807033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.807081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.823108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.148 [2024-07-12 16:12:29.823142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.148 [2024-07-12 16:12:29.833114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.149 [2024-07-12 16:12:29.833145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.149 [2024-07-12 16:12:29.848117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.149 [2024-07-12 16:12:29.848149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.149 [2024-07-12 16:12:29.858775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.149 [2024-07-12 16:12:29.858807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.426 [2024-07-12 16:12:29.870321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.426 [2024-07-12 16:12:29.870353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.426
00:08:46.426 Latency(us)
00:08:46.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:46.426 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:46.426 Nvme1n1 : 5.01 11605.13 90.67 0.00 0.00 11014.65 4587.52 23116.33
00:08:46.426 ===================================================================================================================
00:08:46.426 Total : 11605.13 90.67 0.00 0.00 11014.65 4587.52 23116.33
00:08:46.426 [2024-07-12 16:12:29.881377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.426 [2024-07-12 16:12:29.881407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.893388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.893419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.901406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.901436]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.913422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.913460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.925455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.925496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.937480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.937578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.949446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.949499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.961448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.961501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.973481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.973507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.985462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.985507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:29.997501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:29.997552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:30.009459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:30.009511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:30.017456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:30.017503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:30.029476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:30.029532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:30.037448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:30.037472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 [2024-07-12 16:12:30.049447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.427 [2024-07-12 16:12:30.049469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.427 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67362) - No such process 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67362 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.427 delay0 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.427 16:12:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:46.686 [2024-07-12 16:12:30.234183] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:53.248 Initializing NVMe Controllers 00:08:53.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:53.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:53.248 Initialization complete. Launching workers. 
00:08:53.248 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 90 00:08:53.248 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 377, failed to submit 33 00:08:53.248 success 245, unsuccess 132, failed 0 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.248 rmmod nvme_tcp 00:08:53.248 rmmod nvme_fabrics 00:08:53.248 rmmod nvme_keyring 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67213 ']' 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67213 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67213 ']' 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67213 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67213 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:53.248 killing process with pid 67213 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67213' 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67213 00:08:53.248 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67213 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:53.249 00:08:53.249 real 0m24.476s 00:08:53.249 user 0m40.323s 00:08:53.249 sys 0m6.472s 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.249 16:12:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.249 ************************************ 00:08:53.249 END TEST nvmf_zcopy 00:08:53.249 ************************************ 00:08:53.249 16:12:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:53.249 16:12:36 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:53.249 16:12:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.249 16:12:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.249 16:12:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.249 ************************************ 00:08:53.249 START TEST nvmf_nmic 00:08:53.249 ************************************ 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:53.249 * Looking for test storage... 00:08:53.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.249 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:53.250 Cannot find device "nvmf_tgt_br" 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.250 Cannot find device "nvmf_tgt_br2" 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:53.250 Cannot find device "nvmf_tgt_br" 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:53.250 Cannot find device "nvmf_tgt_br2" 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.250 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.508 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:53.508 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:53.508 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:53.508 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:53.508 16:12:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:53.508 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:53.508 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:53.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:53.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:53.509 00:08:53.509 --- 10.0.0.2 ping statistics --- 00:08:53.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.509 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:53.509 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:53.509 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:53.509 00:08:53.509 --- 10.0.0.3 ping statistics --- 00:08:53.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.509 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:53.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:53.509 00:08:53.509 --- 10.0.0.1 ping statistics --- 00:08:53.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.509 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67692 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67692 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 67692 ']' 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.509 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:53.509 [2024-07-12 16:12:37.192091] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:08:53.509 [2024-07-12 16:12:37.192197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.768 [2024-07-12 16:12:37.332403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.768 [2024-07-12 16:12:37.397091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.768 [2024-07-12 16:12:37.397145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.768 [2024-07-12 16:12:37.397157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.768 [2024-07-12 16:12:37.397165] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.768 [2024-07-12 16:12:37.397173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.768 [2024-07-12 16:12:37.397292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.768 [2024-07-12 16:12:37.398054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.768 [2024-07-12 16:12:37.398113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.768 [2024-07-12 16:12:37.398117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.768 [2024-07-12 16:12:37.429849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.768 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.768 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:08:53.768 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.768 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:53.768 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 [2024-07-12 16:12:37.529841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 Malloc0 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 [2024-07-12 16:12:37.592364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.027 test case1: single bdev can't be used in multiple subsystems 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.027 [2024-07-12 16:12:37.616231] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:54.027 [2024-07-12 16:12:37.616306] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:54.027 [2024-07-12 16:12:37.616319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.027 request: 00:08:54.027 { 00:08:54.027 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:54.027 "namespace": { 00:08:54.027 "bdev_name": "Malloc0", 00:08:54.027 "no_auto_visible": false 00:08:54.027 }, 00:08:54.027 "method": "nvmf_subsystem_add_ns", 00:08:54.027 "req_id": 1 00:08:54.027 } 00:08:54.027 Got JSON-RPC error response 00:08:54.027 response: 00:08:54.027 { 00:08:54.027 "code": -32602, 00:08:54.027 "message": "Invalid parameters" 00:08:54.027 } 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 00:08:54.027 Adding namespace failed - expected result. 00:08:54.027 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:54.027 test case2: host connect to nvmf target in multiple paths 00:08:54.028 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:54.028 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:54.028 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.028 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.028 [2024-07-12 16:12:37.628347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:54.028 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.028 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid=0f8ee936-81ee-4845-9dc2-94c8381dda10 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.287 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid=0f8ee936-81ee-4845-9dc2-94c8381dda10 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:54.287 16:12:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.287 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.287 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.287 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:54.287 16:12:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:56.191 16:12:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:56.191 16:12:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:56.191 16:12:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.191 16:12:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:56.191 16:12:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.191 16:12:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:08:56.191 16:12:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:56.450 [global] 00:08:56.450 thread=1 00:08:56.450 invalidate=1 00:08:56.450 rw=write 00:08:56.450 time_based=1 00:08:56.450 runtime=1 00:08:56.450 ioengine=libaio 00:08:56.450 direct=1 00:08:56.450 bs=4096 00:08:56.450 iodepth=1 00:08:56.450 norandommap=0 00:08:56.450 numjobs=1 00:08:56.450 00:08:56.450 verify_dump=1 00:08:56.450 verify_backlog=512 00:08:56.450 verify_state_save=0 00:08:56.450 do_verify=1 00:08:56.450 verify=crc32c-intel 00:08:56.450 [job0] 00:08:56.450 filename=/dev/nvme0n1 00:08:56.450 Could not set queue depth (nvme0n1) 00:08:56.450 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.450 fio-3.35 00:08:56.450 Starting 1 thread 00:08:57.828 00:08:57.828 job0: (groupid=0, jobs=1): err= 0: pid=67776: Fri Jul 12 16:12:41 
2024 00:08:57.828 read: IOPS=2994, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec) 00:08:57.828 slat (nsec): min=12113, max=76431, avg=16278.76, stdev=5404.36 00:08:57.828 clat (usec): min=129, max=266, avg=178.85, stdev=23.46 00:08:57.828 lat (usec): min=142, max=283, avg=195.13, stdev=24.46 00:08:57.828 clat percentiles (usec): 00:08:57.828 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 159], 00:08:57.828 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 184], 00:08:57.828 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 223], 00:08:57.828 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 265], 99.95th=[ 265], 00:08:57.828 | 99.99th=[ 269] 00:08:57.828 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:57.828 slat (usec): min=15, max=105, avg=23.69, stdev= 7.23 00:08:57.828 clat (usec): min=76, max=268, avg=107.64, stdev=19.41 00:08:57.828 lat (usec): min=94, max=294, avg=131.34, stdev=21.73 00:08:57.828 clat percentiles (usec): 00:08:57.828 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 92], 00:08:57.828 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 103], 60.00th=[ 108], 00:08:57.828 | 70.00th=[ 114], 80.00th=[ 123], 90.00th=[ 135], 95.00th=[ 145], 00:08:57.828 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 204], 99.95th=[ 219], 00:08:57.828 | 99.99th=[ 269] 00:08:57.828 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:08:57.828 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:57.828 lat (usec) : 100=21.70%, 250=78.17%, 500=0.13% 00:08:57.828 cpu : usr=2.10%, sys=9.80%, ctx=6069, majf=0, minf=2 00:08:57.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.828 issued rwts: total=2997,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.828 00:08:57.828 Run status group 0 (all jobs): 00:08:57.828 READ: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=11.7MiB (12.3MB), run=1001-1001msec 00:08:57.828 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:08:57.828 00:08:57.828 Disk stats (read/write): 00:08:57.828 nvme0n1: ios=2610/2953, merge=0/0, ticks=517/382, in_queue=899, util=91.28% 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # 
nvmftestfini 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.828 rmmod nvme_tcp 00:08:57.828 rmmod nvme_fabrics 00:08:57.828 rmmod nvme_keyring 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67692 ']' 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67692 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 67692 ']' 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 67692 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67692 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67692' 00:08:57.828 killing process with pid 67692 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 67692 00:08:57.828 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 67692 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:58.087 00:08:58.087 real 0m4.939s 00:08:58.087 user 0m15.223s 00:08:58.087 sys 0m2.238s 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.087 16:12:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.087 ************************************ 00:08:58.087 END TEST nvmf_nmic 00:08:58.087 ************************************ 00:08:58.087 16:12:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:58.087 16:12:41 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:08:58.087 16:12:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:58.087 16:12:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.087 16:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.087 ************************************ 00:08:58.087 START TEST nvmf_fio_target 00:08:58.087 ************************************ 00:08:58.087 16:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:58.087 * Looking for test storage... 00:08:58.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.087 16:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.087 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:58.088 Cannot find device "nvmf_tgt_br" 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:08:58.088 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.088 Cannot find device "nvmf_tgt_br2" 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:08:58.347 Cannot find device "nvmf_tgt_br" 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:58.347 Cannot find device "nvmf_tgt_br2" 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.347 16:12:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:58.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:08:58.347 00:08:58.347 --- 10.0.0.2 ping statistics --- 00:08:58.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.347 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:58.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:58.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:08:58.347 00:08:58.347 --- 10.0.0.3 ping statistics --- 00:08:58.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.347 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:58.347 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:58.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:58.347 00:08:58.347 --- 10.0.0.1 ping statistics --- 00:08:58.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.347 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.348 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67954 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67954 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 67954 ']' 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.607 16:12:42 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.607 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.607 [2024-07-12 16:12:42.144709] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:08:58.607 [2024-07-12 16:12:42.144775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.607 [2024-07-12 16:12:42.281392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.866 [2024-07-12 16:12:42.336918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.866 [2024-07-12 16:12:42.336990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.866 [2024-07-12 16:12:42.337018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.866 [2024-07-12 16:12:42.337025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.866 [2024-07-12 16:12:42.337046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.866 [2024-07-12 16:12:42.337181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.866 [2024-07-12 16:12:42.337803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.866 [2024-07-12 16:12:42.337973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.866 [2024-07-12 16:12:42.337978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.866 [2024-07-12 16:12:42.368152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.866 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.866 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:08:58.866 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.866 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.866 16:12:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.866 16:12:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.866 16:12:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.125 [2024-07-12 16:12:42.711673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.125 16:12:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.383 16:12:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:59.383 16:12:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:08:59.642 16:12:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:59.642 16:12:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.901 16:12:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:59.901 16:12:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.161 16:12:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:00.161 16:12:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:00.420 16:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.679 16:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:00.679 16:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.938 16:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:00.938 16:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.506 16:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:01.506 16:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:01.506 16:12:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:02.073 16:12:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:02.073 16:12:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.073 16:12:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:02.073 16:12:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:02.332 16:12:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.591 [2024-07-12 16:12:46.275802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.591 16:12:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:02.849 16:12:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:03.108 16:12:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid=0f8ee936-81ee-4845-9dc2-94c8381dda10 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.368 16:12:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:03.368 16:12:46 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:09:03.368 16:12:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.368 16:12:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:03.368 16:12:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:03.368 16:12:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.293 16:12:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.293 16:12:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.293 16:12:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.293 16:12:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:05.293 16:12:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.293 16:12:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:05.293 16:12:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:05.294 [global] 00:09:05.294 thread=1 00:09:05.294 invalidate=1 00:09:05.294 rw=write 00:09:05.294 time_based=1 00:09:05.294 runtime=1 00:09:05.294 ioengine=libaio 00:09:05.294 direct=1 00:09:05.294 bs=4096 00:09:05.294 iodepth=1 00:09:05.294 norandommap=0 00:09:05.294 numjobs=1 00:09:05.294 00:09:05.294 verify_dump=1 00:09:05.294 verify_backlog=512 00:09:05.294 verify_state_save=0 00:09:05.294 do_verify=1 00:09:05.294 verify=crc32c-intel 00:09:05.294 [job0] 00:09:05.294 filename=/dev/nvme0n1 00:09:05.294 [job1] 00:09:05.294 filename=/dev/nvme0n2 00:09:05.294 [job2] 00:09:05.294 filename=/dev/nvme0n3 00:09:05.294 [job3] 00:09:05.294 filename=/dev/nvme0n4 00:09:05.294 Could not set queue depth (nvme0n1) 00:09:05.294 Could not set queue depth (nvme0n2) 00:09:05.294 Could not set queue depth (nvme0n3) 00:09:05.294 Could not set queue depth (nvme0n4) 00:09:05.552 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.552 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.552 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.552 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.552 fio-3.35 00:09:05.552 Starting 4 threads 00:09:06.930 00:09:06.930 job0: (groupid=0, jobs=1): err= 0: pid=68126: Fri Jul 12 16:12:50 2024 00:09:06.930 read: IOPS=1475, BW=5902KiB/s (6044kB/s)(5908KiB/1001msec) 00:09:06.930 slat (nsec): min=16002, max=79569, avg=24045.78, stdev=9865.82 00:09:06.930 clat (usec): min=145, max=1121, avg=360.39, stdev=127.01 00:09:06.930 lat (usec): min=167, max=1163, avg=384.43, stdev=133.80 00:09:06.930 clat percentiles (usec): 00:09:06.930 | 1.00th=[ 194], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 253], 00:09:06.930 | 30.00th=[ 269], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 347], 00:09:06.930 | 70.00th=[ 371], 80.00th=[ 457], 90.00th=[ 553], 95.00th=[ 644], 00:09:06.930 | 99.00th=[ 701], 99.50th=[ 725], 99.90th=[ 1012], 99.95th=[ 1123], 00:09:06.930 | 99.99th=[ 1123] 00:09:06.930 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:06.930 slat (usec): 
min=18, max=110, avg=32.66, stdev=11.78 00:09:06.930 clat (usec): min=93, max=1864, avg=242.94, stdev=104.56 00:09:06.930 lat (usec): min=120, max=1892, avg=275.60, stdev=111.70 00:09:06.930 clat percentiles (usec): 00:09:06.930 | 1.00th=[ 109], 5.00th=[ 126], 10.00th=[ 167], 20.00th=[ 178], 00:09:06.930 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 221], 00:09:06.930 | 70.00th=[ 269], 80.00th=[ 306], 90.00th=[ 412], 95.00th=[ 457], 00:09:06.930 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 701], 99.95th=[ 1860], 00:09:06.930 | 99.99th=[ 1860] 00:09:06.930 bw ( KiB/s): min= 8192, max= 8192, per=25.33%, avg=8192.00, stdev= 0.00, samples=1 00:09:06.930 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:06.930 lat (usec) : 100=0.13%, 250=41.98%, 500=49.52%, 750=8.23%, 1000=0.03% 00:09:06.930 lat (msec) : 2=0.10% 00:09:06.930 cpu : usr=1.80%, sys=6.90%, ctx=3013, majf=0, minf=11 00:09:06.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.930 issued rwts: total=1477,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.930 job1: (groupid=0, jobs=1): err= 0: pid=68127: Fri Jul 12 16:12:50 2024 00:09:06.930 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:06.930 slat (nsec): min=13442, max=61078, avg=17721.82, stdev=5696.24 00:09:06.930 clat (usec): min=150, max=706, avg=322.85, stdev=71.54 00:09:06.930 lat (usec): min=164, max=727, avg=340.57, stdev=71.81 00:09:06.930 clat percentiles (usec): 00:09:06.930 | 1.00th=[ 174], 5.00th=[ 227], 10.00th=[ 239], 20.00th=[ 258], 00:09:06.930 | 30.00th=[ 281], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 338], 00:09:06.930 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 445], 00:09:06.930 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 619], 99.95th=[ 709], 00:09:06.930 | 99.99th=[ 709] 00:09:06.930 write: IOPS=1954, BW=7816KiB/s (8004kB/s)(7824KiB/1001msec); 0 zone resets 00:09:06.930 slat (usec): min=12, max=125, avg=24.33, stdev= 8.51 00:09:06.930 clat (usec): min=89, max=7246, avg=215.65, stdev=279.09 00:09:06.930 lat (usec): min=109, max=7268, avg=239.98, stdev=279.29 00:09:06.930 clat percentiles (usec): 00:09:06.930 | 1.00th=[ 98], 5.00th=[ 111], 10.00th=[ 125], 20.00th=[ 169], 00:09:06.930 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 210], 00:09:06.930 | 70.00th=[ 225], 80.00th=[ 247], 90.00th=[ 273], 95.00th=[ 285], 00:09:06.930 | 99.00th=[ 314], 99.50th=[ 537], 99.90th=[ 6456], 99.95th=[ 7242], 00:09:06.930 | 99.99th=[ 7242] 00:09:06.930 bw ( KiB/s): min= 8192, max= 8192, per=25.33%, avg=8192.00, stdev= 0.00, samples=1 00:09:06.930 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:06.930 lat (usec) : 100=0.77%, 250=51.98%, 500=45.73%, 750=1.35% 00:09:06.930 lat (msec) : 2=0.03%, 4=0.06%, 10=0.09% 00:09:06.930 cpu : usr=1.80%, sys=6.00%, ctx=3492, majf=0, minf=9 00:09:06.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.930 issued rwts: total=1536,1956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.930 job2: (groupid=0, 
jobs=1): err= 0: pid=68128: Fri Jul 12 16:12:50 2024 00:09:06.930 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:06.930 slat (nsec): min=9224, max=47063, avg=14084.55, stdev=5002.91 00:09:06.930 clat (usec): min=216, max=776, avg=323.50, stdev=68.22 00:09:06.930 lat (usec): min=230, max=796, avg=337.59, stdev=67.11 00:09:06.930 clat percentiles (usec): 00:09:06.930 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 255], 00:09:06.930 | 30.00th=[ 269], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 343], 00:09:06.930 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 437], 00:09:06.930 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 717], 99.95th=[ 775], 00:09:06.930 | 99.99th=[ 775] 00:09:06.930 write: IOPS=2039, BW=8160KiB/s (8356kB/s)(8168KiB/1001msec); 0 zone resets 00:09:06.930 slat (usec): min=13, max=108, avg=23.04, stdev= 6.96 00:09:06.930 clat (usec): min=103, max=1645, avg=209.52, stdev=55.46 00:09:06.930 lat (usec): min=124, max=1661, avg=232.57, stdev=55.21 00:09:06.930 clat percentiles (usec): 00:09:06.930 | 1.00th=[ 111], 5.00th=[ 141], 10.00th=[ 169], 20.00th=[ 180], 00:09:06.930 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 210], 00:09:06.930 | 70.00th=[ 225], 80.00th=[ 247], 90.00th=[ 273], 95.00th=[ 285], 00:09:06.930 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[ 553], 99.95th=[ 603], 00:09:06.930 | 99.99th=[ 1647] 00:09:06.930 bw ( KiB/s): min= 8136, max= 8192, per=25.24%, avg=8164.00, stdev=39.60, samples=2 00:09:06.930 iops : min= 2034, max= 2048, avg=2041.00, stdev= 9.90, samples=2 00:09:06.930 lat (usec) : 250=53.61%, 500=45.39%, 750=0.95%, 1000=0.03% 00:09:06.930 lat (msec) : 2=0.03% 00:09:06.930 cpu : usr=1.80%, sys=5.40%, ctx=3578, majf=0, minf=7 00:09:06.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.930 issued rwts: total=1536,2042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.930 job3: (groupid=0, jobs=1): err= 0: pid=68129: Fri Jul 12 16:12:50 2024 00:09:06.930 read: IOPS=2525, BW=9.86MiB/s (10.3MB/s)(9.88MiB/1001msec) 00:09:06.930 slat (nsec): min=12495, max=59767, avg=17126.13, stdev=6764.34 00:09:06.930 clat (usec): min=136, max=546, avg=196.43, stdev=58.16 00:09:06.930 lat (usec): min=149, max=565, avg=213.56, stdev=62.56 00:09:06.930 clat percentiles (usec): 00:09:06.930 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:06.930 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:09:06.930 | 70.00th=[ 194], 80.00th=[ 227], 90.00th=[ 269], 95.00th=[ 330], 00:09:06.930 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 486], 99.95th=[ 490], 00:09:06.930 | 99.99th=[ 545] 00:09:06.930 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:06.930 slat (usec): min=14, max=125, avg=23.43, stdev= 7.50 00:09:06.930 clat (usec): min=94, max=418, avg=152.11, stdev=55.15 00:09:06.930 lat (usec): min=112, max=450, avg=175.54, stdev=59.37 00:09:06.930 clat percentiles (usec): 00:09:06.930 | 1.00th=[ 99], 5.00th=[ 103], 10.00th=[ 110], 20.00th=[ 116], 00:09:06.930 | 30.00th=[ 121], 40.00th=[ 127], 50.00th=[ 133], 60.00th=[ 141], 00:09:06.930 | 70.00th=[ 157], 80.00th=[ 182], 90.00th=[ 208], 95.00th=[ 285], 00:09:06.930 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 404], 99.95th=[ 408], 00:09:06.930 | 
99.99th=[ 420] 00:09:06.930 bw ( KiB/s): min= 9008, max= 9008, per=27.85%, avg=9008.00, stdev= 0.00, samples=1 00:09:06.930 iops : min= 2252, max= 2252, avg=2252.00, stdev= 0.00, samples=1 00:09:06.930 lat (usec) : 100=1.02%, 250=88.80%, 500=10.16%, 750=0.02% 00:09:06.930 cpu : usr=2.40%, sys=7.90%, ctx=5088, majf=0, minf=8 00:09:06.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.930 issued rwts: total=2528,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.930 00:09:06.930 Run status group 0 (all jobs): 00:09:06.930 READ: bw=27.6MiB/s (29.0MB/s), 5902KiB/s-9.86MiB/s (6044kB/s-10.3MB/s), io=27.6MiB (29.0MB), run=1001-1001msec 00:09:06.930 WRITE: bw=31.6MiB/s (33.1MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.6MiB (33.2MB), run=1001-1001msec 00:09:06.930 00:09:06.930 Disk stats (read/write): 00:09:06.930 nvme0n1: ios=1238/1536, merge=0/0, ticks=443/387, in_queue=830, util=88.08% 00:09:06.930 nvme0n2: ios=1474/1536, merge=0/0, ticks=485/308, in_queue=793, util=86.92% 00:09:06.930 nvme0n3: ios=1476/1536, merge=0/0, ticks=452/320, in_queue=772, util=88.92% 00:09:06.930 nvme0n4: ios=2048/2166, merge=0/0, ticks=427/357, in_queue=784, util=89.67% 00:09:06.930 16:12:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:06.930 [global] 00:09:06.930 thread=1 00:09:06.930 invalidate=1 00:09:06.931 rw=randwrite 00:09:06.931 time_based=1 00:09:06.931 runtime=1 00:09:06.931 ioengine=libaio 00:09:06.931 direct=1 00:09:06.931 bs=4096 00:09:06.931 iodepth=1 00:09:06.931 norandommap=0 00:09:06.931 numjobs=1 00:09:06.931 00:09:06.931 verify_dump=1 00:09:06.931 verify_backlog=512 00:09:06.931 verify_state_save=0 00:09:06.931 do_verify=1 00:09:06.931 verify=crc32c-intel 00:09:06.931 [job0] 00:09:06.931 filename=/dev/nvme0n1 00:09:06.931 [job1] 00:09:06.931 filename=/dev/nvme0n2 00:09:06.931 [job2] 00:09:06.931 filename=/dev/nvme0n3 00:09:06.931 [job3] 00:09:06.931 filename=/dev/nvme0n4 00:09:06.931 Could not set queue depth (nvme0n1) 00:09:06.931 Could not set queue depth (nvme0n2) 00:09:06.931 Could not set queue depth (nvme0n3) 00:09:06.931 Could not set queue depth (nvme0n4) 00:09:06.931 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.931 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.931 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.931 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.931 fio-3.35 00:09:06.931 Starting 4 threads 00:09:08.307 00:09:08.307 job0: (groupid=0, jobs=1): err= 0: pid=68188: Fri Jul 12 16:12:51 2024 00:09:08.307 read: IOPS=1121, BW=4488KiB/s (4595kB/s)(4492KiB/1001msec) 00:09:08.307 slat (nsec): min=15686, max=69122, avg=25056.27, stdev=8342.86 00:09:08.307 clat (usec): min=175, max=2831, avg=422.81, stdev=139.27 00:09:08.307 lat (usec): min=199, max=2855, avg=447.86, stdev=143.40 00:09:08.307 clat percentiles (usec): 00:09:08.307 | 1.00th=[ 281], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 334], 00:09:08.307 | 30.00th=[ 347], 40.00th=[ 
355], 50.00th=[ 363], 60.00th=[ 383], 00:09:08.307 | 70.00th=[ 457], 80.00th=[ 529], 90.00th=[ 611], 95.00th=[ 676], 00:09:08.307 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 1319], 99.95th=[ 2835], 00:09:08.307 | 99.99th=[ 2835] 00:09:08.307 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:08.307 slat (usec): min=18, max=114, avg=37.72, stdev=10.08 00:09:08.307 clat (usec): min=100, max=2227, avg=280.14, stdev=100.02 00:09:08.307 lat (usec): min=125, max=2256, avg=317.85, stdev=103.72 00:09:08.307 clat percentiles (usec): 00:09:08.307 | 1.00th=[ 109], 5.00th=[ 126], 10.00th=[ 176], 20.00th=[ 200], 00:09:08.307 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 289], 00:09:08.307 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 379], 95.00th=[ 469], 00:09:08.307 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 701], 99.95th=[ 2212], 00:09:08.307 | 99.99th=[ 2212] 00:09:08.307 bw ( KiB/s): min= 7632, max= 7632, per=23.92%, avg=7632.00, stdev= 0.00, samples=1 00:09:08.307 iops : min= 1908, max= 1908, avg=1908.00, stdev= 0.00, samples=1 00:09:08.307 lat (usec) : 250=16.92%, 500=71.34%, 750=11.43%, 1000=0.19% 00:09:08.307 lat (msec) : 2=0.04%, 4=0.08% 00:09:08.307 cpu : usr=1.80%, sys=7.00%, ctx=2660, majf=0, minf=15 00:09:08.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.307 issued rwts: total=1123,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.307 job1: (groupid=0, jobs=1): err= 0: pid=68189: Fri Jul 12 16:12:51 2024 00:09:08.307 read: IOPS=3031, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:09:08.307 slat (nsec): min=11762, max=56968, avg=14947.01, stdev=4441.25 00:09:08.307 clat (usec): min=126, max=537, avg=161.20, stdev=28.67 00:09:08.307 lat (usec): min=139, max=551, avg=176.15, stdev=29.57 00:09:08.307 clat percentiles (usec): 00:09:08.307 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:09:08.307 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:09:08.307 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 194], 00:09:08.307 | 99.00th=[ 281], 99.50th=[ 375], 99.90th=[ 433], 99.95th=[ 519], 00:09:08.307 | 99.99th=[ 537] 00:09:08.307 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:08.307 slat (usec): min=13, max=102, avg=21.38, stdev= 6.75 00:09:08.307 clat (usec): min=82, max=440, avg=126.41, stdev=39.45 00:09:08.307 lat (usec): min=101, max=460, avg=147.79, stdev=42.46 00:09:08.307 clat percentiles (usec): 00:09:08.307 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 100], 00:09:08.307 | 30.00th=[ 106], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 121], 00:09:08.307 | 70.00th=[ 128], 80.00th=[ 139], 90.00th=[ 184], 95.00th=[ 202], 00:09:08.307 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 367], 00:09:08.307 | 99.99th=[ 441] 00:09:08.307 bw ( KiB/s): min=12288, max=12288, per=38.52%, avg=12288.00, stdev= 0.00, samples=1 00:09:08.307 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:08.307 lat (usec) : 100=10.12%, 250=88.34%, 500=1.51%, 750=0.03% 00:09:08.307 cpu : usr=2.60%, sys=8.40%, ctx=6112, majf=0, minf=8 00:09:08.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:09:08.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.307 issued rwts: total=3035,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.307 job2: (groupid=0, jobs=1): err= 0: pid=68190: Fri Jul 12 16:12:51 2024 00:09:08.307 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:08.307 slat (nsec): min=9092, max=48549, avg=16080.02, stdev=5448.93 00:09:08.307 clat (usec): min=214, max=3312, avg=350.32, stdev=92.69 00:09:08.307 lat (usec): min=229, max=3338, avg=366.40, stdev=93.15 00:09:08.307 clat percentiles (usec): 00:09:08.307 | 1.00th=[ 249], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 318], 00:09:08.307 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:09:08.307 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 429], 00:09:08.307 | 99.00th=[ 474], 99.50th=[ 529], 99.90th=[ 1647], 99.95th=[ 3326], 00:09:08.307 | 99.99th=[ 3326] 00:09:08.307 write: IOPS=1617, BW=6470KiB/s (6625kB/s)(6476KiB/1001msec); 0 zone resets 00:09:08.307 slat (nsec): min=13080, max=77133, avg=25839.31, stdev=6676.45 00:09:08.307 clat (usec): min=105, max=8104, avg=240.04, stdev=220.07 00:09:08.307 lat (usec): min=128, max=8126, avg=265.88, stdev=219.87 00:09:08.307 clat percentiles (usec): 00:09:08.307 | 1.00th=[ 119], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 186], 00:09:08.307 | 30.00th=[ 198], 40.00th=[ 210], 50.00th=[ 225], 60.00th=[ 247], 00:09:08.307 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:09:08.308 | 99.00th=[ 367], 99.50th=[ 437], 99.90th=[ 2343], 99.95th=[ 8094], 00:09:08.308 | 99.99th=[ 8094] 00:09:08.308 bw ( KiB/s): min= 8192, max= 8192, per=25.68%, avg=8192.00, stdev= 0.00, samples=1 00:09:08.308 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:08.308 lat (usec) : 250=31.98%, 500=67.54%, 750=0.25% 00:09:08.308 lat (msec) : 2=0.10%, 4=0.10%, 10=0.03% 00:09:08.308 cpu : usr=1.40%, sys=5.90%, ctx=3155, majf=0, minf=13 00:09:08.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.308 issued rwts: total=1536,1619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.308 job3: (groupid=0, jobs=1): err= 0: pid=68191: Fri Jul 12 16:12:51 2024 00:09:08.308 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:08.308 slat (nsec): min=9081, max=64106, avg=14543.62, stdev=5830.96 00:09:08.308 clat (usec): min=222, max=1563, avg=348.90, stdev=53.47 00:09:08.308 lat (usec): min=233, max=1574, avg=363.45, stdev=53.63 00:09:08.308 clat percentiles (usec): 00:09:08.308 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 318], 00:09:08.308 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:09:08.308 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 429], 00:09:08.308 | 99.00th=[ 498], 99.50th=[ 537], 99.90th=[ 668], 99.95th=[ 1565], 00:09:08.308 | 99.99th=[ 1565] 00:09:08.308 write: IOPS=1754, BW=7017KiB/s (7185kB/s)(7024KiB/1001msec); 0 zone resets 00:09:08.308 slat (usec): min=12, max=117, avg=21.80, stdev= 6.79 00:09:08.308 clat (usec): min=100, max=760, avg=226.49, stdev=56.16 00:09:08.308 lat (usec): min=120, max=776, avg=248.29, stdev=56.10 00:09:08.308 clat percentiles (usec): 00:09:08.308 | 
1.00th=[ 109], 5.00th=[ 125], 10.00th=[ 165], 20.00th=[ 186], 00:09:08.308 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 223], 60.00th=[ 239], 00:09:08.308 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 306], 00:09:08.308 | 99.00th=[ 334], 99.50th=[ 408], 99.90th=[ 725], 99.95th=[ 758], 00:09:08.308 | 99.99th=[ 758] 00:09:08.308 bw ( KiB/s): min= 8192, max= 8192, per=25.68%, avg=8192.00, stdev= 0.00, samples=1 00:09:08.308 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:08.308 lat (usec) : 250=35.54%, 500=63.91%, 750=0.49%, 1000=0.03% 00:09:08.308 lat (msec) : 2=0.03% 00:09:08.308 cpu : usr=0.90%, sys=5.60%, ctx=3293, majf=0, minf=9 00:09:08.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.308 issued rwts: total=1536,1756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.308 00:09:08.308 Run status group 0 (all jobs): 00:09:08.308 READ: bw=28.2MiB/s (29.6MB/s), 4488KiB/s-11.8MiB/s (4595kB/s-12.4MB/s), io=28.2MiB (29.6MB), run=1001-1001msec 00:09:08.308 WRITE: bw=31.2MiB/s (32.7MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=31.2MiB (32.7MB), run=1001-1001msec 00:09:08.308 00:09:08.308 Disk stats (read/write): 00:09:08.308 nvme0n1: ios=1074/1309, merge=0/0, ticks=457/377, in_queue=834, util=88.78% 00:09:08.308 nvme0n2: ios=2609/2707, merge=0/0, ticks=482/372, in_queue=854, util=90.50% 00:09:08.308 nvme0n3: ios=1226/1536, merge=0/0, ticks=426/353, in_queue=779, util=88.48% 00:09:08.308 nvme0n4: ios=1311/1536, merge=0/0, ticks=445/322, in_queue=767, util=89.86% 00:09:08.308 16:12:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:08.308 [global] 00:09:08.308 thread=1 00:09:08.308 invalidate=1 00:09:08.308 rw=write 00:09:08.308 time_based=1 00:09:08.308 runtime=1 00:09:08.308 ioengine=libaio 00:09:08.308 direct=1 00:09:08.308 bs=4096 00:09:08.308 iodepth=128 00:09:08.308 norandommap=0 00:09:08.308 numjobs=1 00:09:08.308 00:09:08.308 verify_dump=1 00:09:08.308 verify_backlog=512 00:09:08.308 verify_state_save=0 00:09:08.308 do_verify=1 00:09:08.308 verify=crc32c-intel 00:09:08.308 [job0] 00:09:08.308 filename=/dev/nvme0n1 00:09:08.308 [job1] 00:09:08.308 filename=/dev/nvme0n2 00:09:08.308 [job2] 00:09:08.308 filename=/dev/nvme0n3 00:09:08.308 [job3] 00:09:08.308 filename=/dev/nvme0n4 00:09:08.308 Could not set queue depth (nvme0n1) 00:09:08.308 Could not set queue depth (nvme0n2) 00:09:08.308 Could not set queue depth (nvme0n3) 00:09:08.308 Could not set queue depth (nvme0n4) 00:09:08.308 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.308 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.308 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.308 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.308 fio-3.35 00:09:08.308 Starting 4 threads 00:09:09.692 00:09:09.692 job0: (groupid=0, jobs=1): err= 0: pid=68248: Fri Jul 12 16:12:53 2024 00:09:09.692 read: IOPS=3098, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1004msec) 00:09:09.692 slat (usec): min=6, 
max=9695, avg=159.29, stdev=699.10 00:09:09.692 clat (usec): min=786, max=45527, avg=20771.63, stdev=4651.57 00:09:09.692 lat (usec): min=6336, max=45576, avg=20930.92, stdev=4670.08 00:09:09.692 clat percentiles (usec): 00:09:09.692 | 1.00th=[ 6849], 5.00th=[15139], 10.00th=[16319], 20.00th=[18220], 00:09:09.692 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19792], 00:09:09.692 | 70.00th=[22414], 80.00th=[25560], 90.00th=[27132], 95.00th=[27657], 00:09:09.692 | 99.00th=[33817], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:09:09.692 | 99.99th=[45351] 00:09:09.692 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:09.692 slat (usec): min=8, max=6337, avg=132.42, stdev=636.34 00:09:09.692 clat (usec): min=7069, max=62654, avg=17361.91, stdev=10527.41 00:09:09.692 lat (usec): min=7100, max=62692, avg=17494.33, stdev=10602.97 00:09:09.692 clat percentiles (usec): 00:09:09.692 | 1.00th=[ 9110], 5.00th=[10945], 10.00th=[11469], 20.00th=[11863], 00:09:09.692 | 30.00th=[12125], 40.00th=[12387], 50.00th=[13698], 60.00th=[14353], 00:09:09.692 | 70.00th=[15008], 80.00th=[17171], 90.00th=[34341], 95.00th=[44827], 00:09:09.692 | 99.00th=[52691], 99.50th=[56361], 99.90th=[61604], 99.95th=[62653], 00:09:09.692 | 99.99th=[62653] 00:09:09.692 bw ( KiB/s): min=13779, max=14208, per=27.58%, avg=13993.50, stdev=303.35, samples=2 00:09:09.692 iops : min= 3444, max= 3552, avg=3498.00, stdev=76.37, samples=2 00:09:09.692 lat (usec) : 1000=0.01% 00:09:09.692 lat (msec) : 10=1.22%, 20=71.98%, 50=25.44%, 100=1.34% 00:09:09.692 cpu : usr=3.19%, sys=12.16%, ctx=257, majf=0, minf=8 00:09:09.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:09.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.692 issued rwts: total=3111,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.692 job1: (groupid=0, jobs=1): err= 0: pid=68249: Fri Jul 12 16:12:53 2024 00:09:09.692 read: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1003msec) 00:09:09.692 slat (usec): min=9, max=5709, avg=167.73, stdev=852.59 00:09:09.692 clat (usec): min=151, max=23689, avg=21348.37, stdev=3012.38 00:09:09.692 lat (usec): min=2455, max=23714, avg=21516.10, stdev=2902.83 00:09:09.692 clat percentiles (usec): 00:09:09.692 | 1.00th=[ 2999], 5.00th=[17171], 10.00th=[20317], 20.00th=[21103], 00:09:09.692 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22414], 00:09:09.692 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:09:09.692 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23725], 99.95th=[23725], 00:09:09.692 | 99.99th=[23725] 00:09:09.692 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:09.692 slat (usec): min=11, max=6272, avg=157.92, stdev=768.38 00:09:09.692 clat (usec): min=14714, max=22919, avg=20604.94, stdev=1064.82 00:09:09.692 lat (usec): min=16599, max=22953, avg=20762.86, stdev=731.71 00:09:09.692 clat percentiles (usec): 00:09:09.692 | 1.00th=[15926], 5.00th=[19268], 10.00th=[19792], 20.00th=[20055], 00:09:09.692 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[20841], 00:09:09.692 | 70.00th=[21103], 80.00th=[21365], 90.00th=[21627], 95.00th=[21890], 00:09:09.692 | 99.00th=[22938], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:09:09.692 | 99.99th=[22938] 00:09:09.692 bw ( KiB/s): 
min=12288, max=12288, per=24.22%, avg=12288.00, stdev= 0.00, samples=2 00:09:09.692 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:09.692 lat (usec) : 250=0.02% 00:09:09.692 lat (msec) : 4=0.53%, 10=0.53%, 20=12.65%, 50=86.27% 00:09:09.692 cpu : usr=2.99%, sys=8.48%, ctx=415, majf=0, minf=9 00:09:09.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:09.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.692 issued rwts: total=2945,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.692 job2: (groupid=0, jobs=1): err= 0: pid=68251: Fri Jul 12 16:12:53 2024 00:09:09.692 read: IOPS=2901, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1004msec) 00:09:09.692 slat (usec): min=5, max=6886, avg=174.15, stdev=712.87 00:09:09.692 clat (usec): min=3219, max=26348, avg=21453.36, stdev=2226.24 00:09:09.692 lat (usec): min=7044, max=26362, avg=21627.51, stdev=2132.73 00:09:09.692 clat percentiles (usec): 00:09:09.692 | 1.00th=[ 8979], 5.00th=[18482], 10.00th=[19006], 20.00th=[20579], 00:09:09.692 | 30.00th=[21103], 40.00th=[21890], 50.00th=[22152], 60.00th=[22152], 00:09:09.692 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23725], 00:09:09.692 | 99.00th=[25297], 99.50th=[26084], 99.90th=[26346], 99.95th=[26346], 00:09:09.692 | 99.99th=[26346] 00:09:09.692 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:09.692 slat (usec): min=11, max=5449, avg=152.83, stdev=738.15 00:09:09.692 clat (usec): min=13980, max=25512, avg=20861.60, stdev=1687.06 00:09:09.692 lat (usec): min=14070, max=25531, avg=21014.43, stdev=1507.51 00:09:09.692 clat percentiles (usec): 00:09:09.692 | 1.00th=[15795], 5.00th=[17957], 10.00th=[19268], 20.00th=[19792], 00:09:09.692 | 30.00th=[20317], 40.00th=[20317], 50.00th=[20841], 60.00th=[20841], 00:09:09.692 | 70.00th=[21365], 80.00th=[21627], 90.00th=[23462], 95.00th=[23987], 00:09:09.692 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:09:09.692 | 99.99th=[25560] 00:09:09.692 bw ( KiB/s): min=12288, max=12288, per=24.22%, avg=12288.00, stdev= 0.00, samples=2 00:09:09.692 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:09.692 lat (msec) : 4=0.02%, 10=0.53%, 20=19.23%, 50=80.22% 00:09:09.692 cpu : usr=2.79%, sys=8.47%, ctx=497, majf=0, minf=11 00:09:09.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:09.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.692 issued rwts: total=2913,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.692 job3: (groupid=0, jobs=1): err= 0: pid=68252: Fri Jul 12 16:12:53 2024 00:09:09.692 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:09:09.692 slat (usec): min=6, max=11942, avg=209.13, stdev=1149.90 00:09:09.692 clat (usec): min=12304, max=48556, avg=27040.32, stdev=8912.96 00:09:09.692 lat (usec): min=14804, max=48575, avg=27249.45, stdev=8909.58 00:09:09.692 clat percentiles (usec): 00:09:09.692 | 1.00th=[14877], 5.00th=[16581], 10.00th=[16909], 20.00th=[17433], 00:09:09.692 | 30.00th=[22676], 40.00th=[24249], 50.00th=[24773], 60.00th=[26346], 00:09:09.692 | 70.00th=[28967], 80.00th=[34341], 
90.00th=[42206], 95.00th=[44303], 00:09:09.692 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:09:09.692 | 99.99th=[48497] 00:09:09.692 write: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1004msec); 0 zone resets 00:09:09.692 slat (usec): min=12, max=10690, avg=146.02, stdev=696.22 00:09:09.692 clat (usec): min=361, max=38673, avg=18883.97, stdev=5579.04 00:09:09.692 lat (usec): min=6820, max=38767, avg=19029.99, stdev=5560.27 00:09:09.692 clat percentiles (usec): 00:09:09.693 | 1.00th=[ 7963], 5.00th=[13435], 10.00th=[13566], 20.00th=[13829], 00:09:09.693 | 30.00th=[14877], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 00:09:09.693 | 70.00th=[19268], 80.00th=[22676], 90.00th=[28967], 95.00th=[29754], 00:09:09.693 | 99.00th=[38011], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:09:09.693 | 99.99th=[38536] 00:09:09.693 bw ( KiB/s): min=10760, max=12312, per=22.73%, avg=11536.00, stdev=1097.43, samples=2 00:09:09.693 iops : min= 2690, max= 3078, avg=2884.00, stdev=274.36, samples=2 00:09:09.693 lat (usec) : 500=0.02% 00:09:09.693 lat (msec) : 10=0.57%, 20=49.88%, 50=49.52% 00:09:09.693 cpu : usr=3.39%, sys=10.37%, ctx=178, majf=0, minf=15 00:09:09.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:09.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.693 issued rwts: total=2560,3009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.693 00:09:09.693 Run status group 0 (all jobs): 00:09:09.693 READ: bw=44.9MiB/s (47.0MB/s), 9.96MiB/s-12.1MiB/s (10.4MB/s-12.7MB/s), io=45.0MiB (47.2MB), run=1003-1004msec 00:09:09.693 WRITE: bw=49.6MiB/s (52.0MB/s), 11.7MiB/s-13.9MiB/s (12.3MB/s-14.6MB/s), io=49.8MiB (52.2MB), run=1003-1004msec 00:09:09.693 00:09:09.693 Disk stats (read/write): 00:09:09.693 nvme0n1: ios=2610/3072, merge=0/0, ticks=26771/23192, in_queue=49963, util=88.67% 00:09:09.693 nvme0n2: ios=2609/2560, merge=0/0, ticks=13200/11604, in_queue=24804, util=89.07% 00:09:09.693 nvme0n3: ios=2560/2592, merge=0/0, ticks=13953/10058, in_queue=24011, util=89.28% 00:09:09.693 nvme0n4: ios=2304/2560, merge=0/0, ticks=15107/9560, in_queue=24667, util=89.73% 00:09:09.693 16:12:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:09.693 [global] 00:09:09.693 thread=1 00:09:09.693 invalidate=1 00:09:09.693 rw=randwrite 00:09:09.693 time_based=1 00:09:09.693 runtime=1 00:09:09.693 ioengine=libaio 00:09:09.693 direct=1 00:09:09.693 bs=4096 00:09:09.693 iodepth=128 00:09:09.693 norandommap=0 00:09:09.693 numjobs=1 00:09:09.693 00:09:09.693 verify_dump=1 00:09:09.693 verify_backlog=512 00:09:09.693 verify_state_save=0 00:09:09.693 do_verify=1 00:09:09.693 verify=crc32c-intel 00:09:09.693 [job0] 00:09:09.693 filename=/dev/nvme0n1 00:09:09.693 [job1] 00:09:09.693 filename=/dev/nvme0n2 00:09:09.693 [job2] 00:09:09.693 filename=/dev/nvme0n3 00:09:09.693 [job3] 00:09:09.693 filename=/dev/nvme0n4 00:09:09.693 Could not set queue depth (nvme0n1) 00:09:09.693 Could not set queue depth (nvme0n2) 00:09:09.693 Could not set queue depth (nvme0n3) 00:09:09.693 Could not set queue depth (nvme0n4) 00:09:09.693 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.693 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.693 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.693 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.693 fio-3.35 00:09:09.693 Starting 4 threads 00:09:11.067 00:09:11.067 job0: (groupid=0, jobs=1): err= 0: pid=68305: Fri Jul 12 16:12:54 2024 00:09:11.067 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:09:11.067 slat (usec): min=8, max=5145, avg=84.90, stdev=378.01 00:09:11.067 clat (usec): min=7083, max=17007, avg=11174.70, stdev=1297.31 00:09:11.067 lat (usec): min=7107, max=17072, avg=11259.60, stdev=1306.91 00:09:11.067 clat percentiles (usec): 00:09:11.067 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10159], 00:09:11.067 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11338], 60.00th=[11731], 00:09:11.067 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[13173], 00:09:11.067 | 99.00th=[15008], 99.50th=[15664], 99.90th=[16581], 99.95th=[16581], 00:09:11.067 | 99.99th=[16909] 00:09:11.067 write: IOPS=5969, BW=23.3MiB/s (24.4MB/s)(23.4MiB/1005msec); 0 zone resets 00:09:11.067 slat (usec): min=12, max=4392, avg=79.38, stdev=393.66 00:09:11.067 clat (usec): min=4501, max=16707, avg=10704.50, stdev=1400.50 00:09:11.067 lat (usec): min=4966, max=17066, avg=10783.87, stdev=1447.41 00:09:11.067 clat percentiles (usec): 00:09:11.067 | 1.00th=[ 6652], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9634], 00:09:11.067 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:09:11.067 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12387], 95.00th=[12780], 00:09:11.067 | 99.00th=[14746], 99.50th=[15270], 99.90th=[16712], 99.95th=[16712], 00:09:11.067 | 99.99th=[16712] 00:09:11.067 bw ( KiB/s): min=23272, max=23704, per=38.79%, avg=23488.00, stdev=305.47, samples=2 00:09:11.067 iops : min= 5818, max= 5926, avg=5872.00, stdev=76.37, samples=2 00:09:11.067 lat (msec) : 10=21.62%, 20=78.38% 00:09:11.067 cpu : usr=4.18%, sys=16.73%, ctx=490, majf=0, minf=7 00:09:11.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:11.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.068 issued rwts: total=5632,5999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.068 job1: (groupid=0, jobs=1): err= 0: pid=68306: Fri Jul 12 16:12:54 2024 00:09:11.068 read: IOPS=2323, BW=9295KiB/s (9518kB/s)(9332KiB/1004msec) 00:09:11.068 slat (usec): min=6, max=9806, avg=224.07, stdev=797.94 00:09:11.068 clat (usec): min=3818, max=41354, avg=27968.02, stdev=5576.75 00:09:11.068 lat (usec): min=3831, max=41377, avg=28192.09, stdev=5584.81 00:09:11.068 clat percentiles (usec): 00:09:11.068 | 1.00th=[ 7767], 5.00th=[19792], 10.00th=[22152], 20.00th=[24249], 00:09:11.068 | 30.00th=[25035], 40.00th=[25560], 50.00th=[27919], 60.00th=[29492], 00:09:11.068 | 70.00th=[31065], 80.00th=[32637], 90.00th=[34866], 95.00th=[36439], 00:09:11.068 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[41157], 00:09:11.068 | 99.99th=[41157] 00:09:11.068 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:09:11.068 slat (usec): min=5, max=9630, avg=178.22, stdev=760.91 00:09:11.068 clat (usec): min=14039, max=40495, avg=23925.55, stdev=5383.33 
00:09:11.068 lat (usec): min=14062, max=41751, avg=24103.76, stdev=5403.61 00:09:11.068 clat percentiles (usec): 00:09:11.068 | 1.00th=[15926], 5.00th=[17171], 10.00th=[17695], 20.00th=[18482], 00:09:11.068 | 30.00th=[20317], 40.00th=[22414], 50.00th=[23200], 60.00th=[24511], 00:09:11.068 | 70.00th=[26346], 80.00th=[27657], 90.00th=[31065], 95.00th=[35390], 00:09:11.068 | 99.00th=[39060], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:09:11.068 | 99.99th=[40633] 00:09:11.068 bw ( KiB/s): min= 8192, max=12288, per=16.91%, avg=10240.00, stdev=2896.31, samples=2 00:09:11.068 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:09:11.068 lat (msec) : 4=0.22%, 10=0.35%, 20=17.04%, 50=82.38% 00:09:11.068 cpu : usr=1.79%, sys=8.57%, ctx=759, majf=0, minf=13 00:09:11.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:11.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.068 issued rwts: total=2333,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.068 job2: (groupid=0, jobs=1): err= 0: pid=68307: Fri Jul 12 16:12:54 2024 00:09:11.068 read: IOPS=2146, BW=8586KiB/s (8792kB/s)(8612KiB/1003msec) 00:09:11.068 slat (usec): min=4, max=7284, avg=220.74, stdev=787.58 00:09:11.068 clat (usec): min=695, max=42756, avg=27258.06, stdev=5409.18 00:09:11.068 lat (usec): min=2016, max=42770, avg=27478.80, stdev=5413.39 00:09:11.068 clat percentiles (usec): 00:09:11.068 | 1.00th=[ 4555], 5.00th=[21627], 10.00th=[23462], 20.00th=[24511], 00:09:11.068 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26084], 60.00th=[26870], 00:09:11.068 | 70.00th=[29492], 80.00th=[31851], 90.00th=[34341], 95.00th=[35914], 00:09:11.068 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:09:11.068 | 99.99th=[42730] 00:09:11.068 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:09:11.068 slat (usec): min=8, max=9832, avg=197.83, stdev=784.59 00:09:11.068 clat (usec): min=14795, max=39534, avg=26256.78, stdev=5478.76 00:09:11.068 lat (usec): min=14819, max=39550, avg=26454.61, stdev=5481.40 00:09:11.068 clat percentiles (usec): 00:09:11.068 | 1.00th=[15533], 5.00th=[17695], 10.00th=[20055], 20.00th=[21890], 00:09:11.068 | 30.00th=[22938], 40.00th=[23725], 50.00th=[25822], 60.00th=[26608], 00:09:11.068 | 70.00th=[28705], 80.00th=[31065], 90.00th=[35390], 95.00th=[36439], 00:09:11.068 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:09:11.068 | 99.99th=[39584] 00:09:11.068 bw ( KiB/s): min=10888, max=10888, per=17.98%, avg=10888.00, stdev= 0.00, samples=1 00:09:11.068 iops : min= 2722, max= 2722, avg=2722.00, stdev= 0.00, samples=1 00:09:11.068 lat (usec) : 750=0.02% 00:09:11.068 lat (msec) : 4=0.17%, 10=0.51%, 20=6.26%, 50=93.04% 00:09:11.068 cpu : usr=2.30%, sys=7.78%, ctx=753, majf=0, minf=15 00:09:11.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:11.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.068 issued rwts: total=2153,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.068 job3: (groupid=0, jobs=1): err= 0: pid=68308: Fri Jul 12 16:12:54 2024 00:09:11.068 read: IOPS=3618, BW=14.1MiB/s 
(14.8MB/s)(14.2MiB/1005msec) 00:09:11.068 slat (usec): min=6, max=8381, avg=127.90, stdev=549.32 00:09:11.068 clat (usec): min=2175, max=40711, avg=16535.43, stdev=7319.25 00:09:11.068 lat (usec): min=4787, max=40732, avg=16663.33, stdev=7356.90 00:09:11.068 clat percentiles (usec): 00:09:11.068 | 1.00th=[10552], 5.00th=[12911], 10.00th=[13042], 20.00th=[13304], 00:09:11.068 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:09:11.068 | 70.00th=[13960], 80.00th=[14222], 90.00th=[32900], 95.00th=[34341], 00:09:11.068 | 99.00th=[38011], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:09:11.068 | 99.99th=[40633] 00:09:11.068 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:11.068 slat (usec): min=10, max=9208, avg=123.02, stdev=553.20 00:09:11.068 clat (usec): min=9618, max=38432, avg=16293.32, stdev=6943.86 00:09:11.068 lat (usec): min=10728, max=38456, avg=16416.34, stdev=6978.72 00:09:11.068 clat percentiles (usec): 00:09:11.068 | 1.00th=[10552], 5.00th=[12256], 10.00th=[12387], 20.00th=[12649], 00:09:11.068 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:09:11.068 | 70.00th=[13566], 80.00th=[15795], 90.00th=[30540], 95.00th=[32637], 00:09:11.068 | 99.00th=[35390], 99.50th=[36439], 99.90th=[38536], 99.95th=[38536], 00:09:11.068 | 99.99th=[38536] 00:09:11.068 bw ( KiB/s): min=11688, max=20480, per=26.56%, avg=16084.00, stdev=6216.88, samples=2 00:09:11.068 iops : min= 2922, max= 5120, avg=4021.00, stdev=1554.22, samples=2 00:09:11.068 lat (msec) : 4=0.01%, 10=0.34%, 20=83.29%, 50=16.36% 00:09:11.068 cpu : usr=3.49%, sys=12.05%, ctx=469, majf=0, minf=10 00:09:11.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:11.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.068 issued rwts: total=3637,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.068 00:09:11.068 Run status group 0 (all jobs): 00:09:11.068 READ: bw=53.5MiB/s (56.1MB/s), 8586KiB/s-21.9MiB/s (8792kB/s-23.0MB/s), io=53.7MiB (56.3MB), run=1003-1005msec 00:09:11.068 WRITE: bw=59.1MiB/s (62.0MB/s), 9.96MiB/s-23.3MiB/s (10.4MB/s-24.4MB/s), io=59.4MiB (62.3MB), run=1003-1005msec 00:09:11.068 00:09:11.068 Disk stats (read/write): 00:09:11.068 nvme0n1: ios=4767/5120, merge=0/0, ticks=25802/23386, in_queue=49188, util=89.07% 00:09:11.068 nvme0n2: ios=2097/2317, merge=0/0, ticks=17249/14633, in_queue=31882, util=89.48% 00:09:11.068 nvme0n3: ios=2048/2088, merge=0/0, ticks=17095/14730, in_queue=31825, util=88.17% 00:09:11.068 nvme0n4: ios=3570/3584, merge=0/0, ticks=13570/10525, in_queue=24095, util=89.13% 00:09:11.068 16:12:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:11.068 16:12:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68327 00:09:11.068 16:12:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:11.068 16:12:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:11.068 [global] 00:09:11.068 thread=1 00:09:11.068 invalidate=1 00:09:11.068 rw=read 00:09:11.068 time_based=1 00:09:11.068 runtime=10 00:09:11.068 ioengine=libaio 00:09:11.068 direct=1 00:09:11.068 bs=4096 00:09:11.068 iodepth=1 00:09:11.068 norandommap=1 00:09:11.068 numjobs=1 00:09:11.068 00:09:11.068 [job0] 00:09:11.068 
filename=/dev/nvme0n1 00:09:11.068 [job1] 00:09:11.068 filename=/dev/nvme0n2 00:09:11.068 [job2] 00:09:11.068 filename=/dev/nvme0n3 00:09:11.068 [job3] 00:09:11.068 filename=/dev/nvme0n4 00:09:11.068 Could not set queue depth (nvme0n1) 00:09:11.068 Could not set queue depth (nvme0n2) 00:09:11.068 Could not set queue depth (nvme0n3) 00:09:11.068 Could not set queue depth (nvme0n4) 00:09:11.068 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.068 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.068 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.068 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.068 fio-3.35 00:09:11.068 Starting 4 threads 00:09:14.356 16:12:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:14.356 fio: pid=68370, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:14.356 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=63733760, buflen=4096 00:09:14.356 16:12:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:14.356 fio: pid=68369, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:14.356 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=45535232, buflen=4096 00:09:14.356 16:12:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.356 16:12:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:14.613 fio: pid=68367, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:14.613 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=11382784, buflen=4096 00:09:14.613 16:12:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.613 16:12:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:14.872 fio: pid=68368, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:14.872 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55238656, buflen=4096 00:09:14.872 16:12:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.872 16:12:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:14.872 00:09:14.872 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68367: Fri Jul 12 16:12:58 2024 00:09:14.872 read: IOPS=5605, BW=21.9MiB/s (23.0MB/s)(74.9MiB/3419msec) 00:09:14.872 slat (usec): min=11, max=14662, avg=16.94, stdev=147.28 00:09:14.872 clat (usec): min=126, max=2018, avg=159.83, stdev=26.01 00:09:14.872 lat (usec): min=138, max=14910, avg=176.77, stdev=150.91 00:09:14.872 clat percentiles (usec): 00:09:14.872 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:09:14.872 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:09:14.872 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 
00:09:14.872 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 326], 99.95th=[ 529], 00:09:14.872 | 99.99th=[ 1532] 00:09:14.872 bw ( KiB/s): min=21360, max=23136, per=34.86%, avg=22528.00, stdev=819.74, samples=6 00:09:14.872 iops : min= 5340, max= 5784, avg=5632.00, stdev=204.94, samples=6 00:09:14.872 lat (usec) : 250=99.76%, 500=0.17%, 750=0.04% 00:09:14.872 lat (msec) : 2=0.02%, 4=0.01% 00:09:14.872 cpu : usr=1.96%, sys=7.49%, ctx=19170, majf=0, minf=1 00:09:14.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.872 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.872 issued rwts: total=19164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.872 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68368: Fri Jul 12 16:12:58 2024 00:09:14.872 read: IOPS=3672, BW=14.3MiB/s (15.0MB/s)(52.7MiB/3672msec) 00:09:14.872 slat (usec): min=11, max=9700, avg=18.73, stdev=179.63 00:09:14.872 clat (usec): min=120, max=4792, avg=251.74, stdev=81.76 00:09:14.872 lat (usec): min=133, max=9903, avg=270.47, stdev=197.12 00:09:14.872 clat percentiles (usec): 00:09:14.872 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 157], 20.00th=[ 233], 00:09:14.872 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:09:14.872 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:09:14.872 | 99.00th=[ 416], 99.50th=[ 482], 99.90th=[ 914], 99.95th=[ 1483], 00:09:14.872 | 99.99th=[ 3851] 00:09:14.872 bw ( KiB/s): min=13176, max=18527, per=22.41%, avg=14483.29, stdev=1821.53, samples=7 00:09:14.872 iops : min= 3294, max= 4631, avg=3620.71, stdev=455.11, samples=7 00:09:14.872 lat (usec) : 250=31.30%, 500=68.24%, 750=0.32%, 1000=0.06% 00:09:14.872 lat (msec) : 2=0.05%, 4=0.01%, 10=0.01% 00:09:14.872 cpu : usr=1.04%, sys=5.15%, ctx=13496, majf=0, minf=1 00:09:14.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.872 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.872 issued rwts: total=13487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.872 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68369: Fri Jul 12 16:12:58 2024 00:09:14.872 read: IOPS=3487, BW=13.6MiB/s (14.3MB/s)(43.4MiB/3188msec) 00:09:14.872 slat (usec): min=12, max=9682, avg=16.39, stdev=114.12 00:09:14.872 clat (usec): min=138, max=6141, avg=268.64, stdev=74.66 00:09:14.872 lat (usec): min=151, max=9988, avg=285.03, stdev=137.14 00:09:14.872 clat percentiles (usec): 00:09:14.872 | 1.00th=[ 184], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 251], 00:09:14.872 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:09:14.872 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:09:14.872 | 99.00th=[ 322], 99.50th=[ 433], 99.90th=[ 676], 99.95th=[ 1237], 00:09:14.872 | 99.99th=[ 3228] 00:09:14.872 bw ( KiB/s): min=13456, max=14216, per=21.62%, avg=13970.67, stdev=288.70, samples=6 00:09:14.872 iops : min= 3364, max= 3554, avg=3492.67, stdev=72.17, samples=6 00:09:14.872 lat (usec) : 250=17.75%, 500=81.87%, 750=0.31%, 1000=0.02% 00:09:14.872 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 
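For reference, the job file echoed earlier in this trace is what scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 assembles before launching fio against the four connected namespaces. A minimal standalone reproduction, built only from the [global]/[job*] options visible in the log (the /tmp path is hypothetical; the real wrapper discovers the nvme devices itself):

cat > /tmp/nvmf_read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf_read.fio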
00:09:14.872 cpu : usr=1.35%, sys=4.49%, ctx=11123, majf=0, minf=1 00:09:14.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.872 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.872 issued rwts: total=11118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.872 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68370: Fri Jul 12 16:12:58 2024 00:09:14.872 read: IOPS=5280, BW=20.6MiB/s (21.6MB/s)(60.8MiB/2947msec) 00:09:14.872 slat (nsec): min=11253, max=87586, avg=14527.46, stdev=3585.37 00:09:14.872 clat (usec): min=142, max=6847, avg=173.30, stdev=121.61 00:09:14.872 lat (usec): min=154, max=6859, avg=187.83, stdev=121.82 00:09:14.872 clat percentiles (usec): 00:09:14.872 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:09:14.872 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:09:14.872 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:09:14.872 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 644], 99.95th=[ 3326], 00:09:14.872 | 99.99th=[ 6783] 00:09:14.872 bw ( KiB/s): min=19992, max=21968, per=32.59%, avg=21064.00, stdev=892.87, samples=5 00:09:14.872 iops : min= 4998, max= 5492, avg=5266.00, stdev=223.22, samples=5 00:09:14.872 lat (usec) : 250=99.82%, 500=0.06%, 750=0.03%, 1000=0.01% 00:09:14.872 lat (msec) : 2=0.01%, 4=0.04%, 10=0.03% 00:09:14.872 cpu : usr=1.63%, sys=7.06%, ctx=15565, majf=0, minf=1 00:09:14.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.872 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.872 issued rwts: total=15561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.872 00:09:14.872 Run status group 0 (all jobs): 00:09:14.872 READ: bw=63.1MiB/s (66.2MB/s), 13.6MiB/s-21.9MiB/s (14.3MB/s-23.0MB/s), io=232MiB (243MB), run=2947-3672msec 00:09:14.872 00:09:14.872 Disk stats (read/write): 00:09:14.872 nvme0n1: ios=18870/0, merge=0/0, ticks=3070/0, in_queue=3070, util=95.28% 00:09:14.872 nvme0n2: ios=13201/0, merge=0/0, ticks=3403/0, in_queue=3403, util=95.48% 00:09:14.872 nvme0n3: ios=10867/0, merge=0/0, ticks=2932/0, in_queue=2932, util=96.24% 00:09:14.872 nvme0n4: ios=15136/0, merge=0/0, ticks=2652/0, in_queue=2652, util=96.29% 00:09:15.130 16:12:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.130 16:12:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:15.389 16:12:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.389 16:12:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:15.648 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.648 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:15.908 16:12:59 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.908 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68327 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:16.167 nvmf hotplug test: fio failed as expected 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:16.167 16:12:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:16.426 rmmod nvme_tcp 00:09:16.426 rmmod nvme_fabrics 00:09:16.426 rmmod nvme_keyring 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67954 ']' 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67954 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 67954 ']' 
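The trace above is the hotplug check: target/fio.sh deletes the RAID, concat and malloc bdevs out from under the still-running read job, expects fio to die with remote I/O errors (hence fio_status=4 and "fio failed as expected"), then disconnects the initiator and waits for the SPDKISFASTANDAWESOME serial to disappear from lsblk. A condensed sketch of that flow using only commands visible in the log ($fio_pid stands in for the script's pid bookkeeping, the *_bdevs variables are the ones target/fio.sh tracks, and the polling loop is simplified):

# remove the backing bdevs while fio is still reading from them
scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done

# the read job is expected to fail now
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'

# drop the kernel initiator and wait until the namespaces are gone
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
done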
00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 67954 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67954 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:16.426 killing process with pid 67954 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67954' 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 67954 00:09:16.426 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 67954 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:16.685 00:09:16.685 real 0m18.674s 00:09:16.685 user 1m10.483s 00:09:16.685 sys 0m10.317s 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.685 16:13:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.685 ************************************ 00:09:16.685 END TEST nvmf_fio_target 00:09:16.685 ************************************ 00:09:16.685 16:13:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:16.685 16:13:00 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:16.685 16:13:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:16.685 16:13:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.685 16:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:16.685 ************************************ 00:09:16.685 START TEST nvmf_bdevio 00:09:16.685 ************************************ 00:09:16.685 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:16.945 * Looking for test storage... 
00:09:16.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.945 16:13:00 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:16.945 Cannot find device "nvmf_tgt_br" 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.945 Cannot find device "nvmf_tgt_br2" 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:16.945 Cannot find device "nvmf_tgt_br" 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:16.945 Cannot find device "nvmf_tgt_br2" 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.945 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:17.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:09:17.206 00:09:17.206 --- 10.0.0.2 ping statistics --- 00:09:17.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.206 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:17.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:17.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:17.206 00:09:17.206 --- 10.0.0.3 ping statistics --- 00:09:17.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.206 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:17.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:17.206 00:09:17.206 --- 10.0.0.1 ping statistics --- 00:09:17.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.206 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68633 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68633 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 68633 ']' 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.206 16:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.206 [2024-07-12 16:13:00.902520] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:09:17.206 [2024-07-12 16:13:00.902628] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.465 [2024-07-12 16:13:01.044155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.465 [2024-07-12 16:13:01.102472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.465 [2024-07-12 16:13:01.102532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
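The nvmf_veth_init sequence traced above builds a small test topology: one initiator-side veth pair in the root namespace, two target-side veth pairs moved into nvmf_tgt_ns_spdk, all joined over the nvmf_br bridge, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 on the target, and a ping in each direction to prove connectivity. A condensed sketch of the same commands (only the steps shown in the log; the helper also tears down any leftover links first and handles errors):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
modprobe nvme-tcp                                    # kernel initiator for the host side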
00:09:17.465 [2024-07-12 16:13:01.102541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.465 [2024-07-12 16:13:01.102549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.465 [2024-07-12 16:13:01.102555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.465 [2024-07-12 16:13:01.103589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:17.465 [2024-07-12 16:13:01.103780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:17.465 [2024-07-12 16:13:01.103997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:17.465 [2024-07-12 16:13:01.104137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.465 [2024-07-12 16:13:01.134416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 [2024-07-12 16:13:01.862535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 Malloc0 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 [2024-07-12 16:13:01.921347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:18.401 { 00:09:18.401 "params": { 00:09:18.401 "name": "Nvme$subsystem", 00:09:18.401 "trtype": "$TEST_TRANSPORT", 00:09:18.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.401 "adrfam": "ipv4", 00:09:18.401 "trsvcid": "$NVMF_PORT", 00:09:18.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.401 "hdgst": ${hdgst:-false}, 00:09:18.401 "ddgst": ${ddgst:-false} 00:09:18.401 }, 00:09:18.401 "method": "bdev_nvme_attach_controller" 00:09:18.401 } 00:09:18.401 EOF 00:09:18.401 )") 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:18.401 16:13:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:18.401 "params": { 00:09:18.401 "name": "Nvme1", 00:09:18.401 "trtype": "tcp", 00:09:18.401 "traddr": "10.0.0.2", 00:09:18.401 "adrfam": "ipv4", 00:09:18.401 "trsvcid": "4420", 00:09:18.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.401 "hdgst": false, 00:09:18.401 "ddgst": false 00:09:18.401 }, 00:09:18.401 "method": "bdev_nvme_attach_controller" 00:09:18.401 }' 00:09:18.401 [2024-07-12 16:13:01.980514] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
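By this point bdevio.sh has started nvmf_tgt inside the namespace and provisioned it through rpc_cmd. Stripped of the test-framework wrapper (rpc_cmd is the framework's front-end for scripts/rpc.py), the RPC sequence traced above amounts to the following; the sizes come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set earlier in bdevio.sh:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the options traced above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420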
00:09:18.401 [2024-07-12 16:13:01.980611] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68669 ] 00:09:18.401 [2024-07-12 16:13:02.121166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:18.659 [2024-07-12 16:13:02.196058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.659 [2024-07-12 16:13:02.196202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.659 [2024-07-12 16:13:02.196208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.659 [2024-07-12 16:13:02.239110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:18.659 I/O targets: 00:09:18.659 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:18.659 00:09:18.659 00:09:18.659 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.659 http://cunit.sourceforge.net/ 00:09:18.659 00:09:18.659 00:09:18.659 Suite: bdevio tests on: Nvme1n1 00:09:18.659 Test: blockdev write read block ...passed 00:09:18.659 Test: blockdev write zeroes read block ...passed 00:09:18.659 Test: blockdev write zeroes read no split ...passed 00:09:18.659 Test: blockdev write zeroes read split ...passed 00:09:18.659 Test: blockdev write zeroes read split partial ...passed 00:09:18.659 Test: blockdev reset ...[2024-07-12 16:13:02.373934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:18.659 [2024-07-12 16:13:02.374206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ea440 (9): Bad file descriptor 00:09:18.918 [2024-07-12 16:13:02.391760] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
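The --json /dev/fd/62 handed to bdevio above came from gen_nvmf_target_json, whose rendered bdev_nvme_attach_controller parameters are printed verbatim in the trace. Folded into a standalone config file it would look roughly like the following sketch; the outer "subsystems"/"config" wrapper is the usual SPDK JSON-config layout and is an assumption here (only the inner object appears in the log), and the file path is hypothetical:

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json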
00:09:18.918 passed 00:09:18.918 Test: blockdev write read 8 blocks ...passed 00:09:18.918 Test: blockdev write read size > 128k ...passed 00:09:18.918 Test: blockdev write read invalid size ...passed 00:09:18.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:18.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:18.918 Test: blockdev write read max offset ...passed 00:09:18.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:18.918 Test: blockdev writev readv 8 blocks ...passed 00:09:18.918 Test: blockdev writev readv 30 x 1block ...passed 00:09:18.918 Test: blockdev writev readv block ...passed 00:09:18.918 Test: blockdev writev readv size > 128k ...passed 00:09:18.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:18.918 Test: blockdev comparev and writev ...[2024-07-12 16:13:02.400662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.918 [2024-07-12 16:13:02.400751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:18.918 [2024-07-12 16:13:02.400772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.918 [2024-07-12 16:13:02.400784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:18.918 [2024-07-12 16:13:02.401106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.918 [2024-07-12 16:13:02.401139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:18.918 [2024-07-12 16:13:02.401167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.918 [2024-07-12 16:13:02.401178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:18.918 [2024-07-12 16:13:02.401556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.919 [2024-07-12 16:13:02.401587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:18.919 [2024-07-12 16:13:02.401605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.919 [2024-07-12 16:13:02.401615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:18.919 [2024-07-12 16:13:02.401962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.919 [2024-07-12 16:13:02.401994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:18.919 [2024-07-12 16:13:02.402013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.919 [2024-07-12 16:13:02.402023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:18.919 passed 00:09:18.919 Test: blockdev nvme passthru rw ...passed 00:09:18.919 Test: blockdev nvme passthru vendor specific ...[2024-07-12 16:13:02.403180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.919 [2024-07-12 16:13:02.403319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:18.919 [2024-07-12 16:13:02.403715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.919 [2024-07-12 16:13:02.403747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:18.919 [2024-07-12 16:13:02.403987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.919 [2024-07-12 16:13:02.404018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:18.919 [2024-07-12 16:13:02.404135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.919 [2024-07-12 16:13:02.404261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:18.919 passed 00:09:18.919 Test: blockdev nvme admin passthru ...passed 00:09:18.919 Test: blockdev copy ...passed 00:09:18.919 00:09:18.919 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.919 suites 1 1 n/a 0 0 00:09:18.919 tests 23 23 23 0 0 00:09:18.919 asserts 152 152 152 0 n/a 00:09:18.919 00:09:18.919 Elapsed time = 0.158 seconds 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.919 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.919 rmmod nvme_tcp 00:09:19.178 rmmod nvme_fabrics 00:09:19.178 rmmod nvme_keyring 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68633 ']' 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68633 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
68633 ']' 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 68633 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68633 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:19.178 killing process with pid 68633 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68633' 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 68633 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 68633 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.178 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.436 16:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:19.436 ************************************ 00:09:19.436 END TEST nvmf_bdevio 00:09:19.436 ************************************ 00:09:19.436 00:09:19.436 real 0m2.521s 00:09:19.436 user 0m8.250s 00:09:19.436 sys 0m0.634s 00:09:19.436 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.436 16:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:19.436 16:13:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:19.436 16:13:02 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:19.436 16:13:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.436 16:13:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.436 16:13:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:19.436 ************************************ 00:09:19.436 START TEST nvmf_auth_target 00:09:19.436 ************************************ 00:09:19.436 16:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:19.436 * Looking for test storage... 
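The bdevio teardown just traced mirrors the fio test's: drop the subsystem over RPC, unload the kernel initiator modules, kill the nvmf_tgt process and flush the test interface. A condensed sketch of what nvmftestfini does in this log; the namespace removal itself is xtrace-disabled above, so the ip netns delete line is only an assumption about what _remove_spdk_ns boils down to:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp              # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # 68633 for the bdevio run
ip netns delete nvmf_tgt_ns_spdk     # assumed body of _remove_spdk_ns (not traced here)
ip -4 addr flush nvmf_init_if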
00:09:19.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.436 16:13:03 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:19.437 Cannot find device "nvmf_tgt_br" 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.437 Cannot find device "nvmf_tgt_br2" 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:19.437 Cannot find device "nvmf_tgt_br" 00:09:19.437 
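Before nvmftestinit rebuilds the veth topology (the "Cannot find device" lines above are just the cleanup of links that no longer exist after the previous teardown), auth.sh has already declared the matrix it will exercise. Pulled out of the xtrace, the declarations are:

digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=$NVME_HOSTNQN                # common.sh generated this via 'nvme gen-hostnqn':
                                     # nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10
hostsock=/var/tmp/host.sock
keys=() ckeys=()                     # populated later, outside this excerpt

Presumably the test then walks the digest/dhgroup combinations declared here, but that part of auth.sh is not shown in this excerpt.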
16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:19.437 Cannot find device "nvmf_tgt_br2" 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:19.437 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:19.695 16:13:03 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:19.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:09:19.695 00:09:19.695 --- 10.0.0.2 ping statistics --- 00:09:19.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.695 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:19.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:19.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:19.695 00:09:19.695 --- 10.0.0.3 ping statistics --- 00:09:19.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.695 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:19.695 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:19.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:19.695 00:09:19.695 --- 10.0.0.1 ping statistics --- 00:09:19.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.695 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68841 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68841 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 68841 ']' 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.953 16:13:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.953 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.954 16:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=68879 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c17761add512b406d554ab8b39279805a2a3c7f9ce5e9228 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yX3 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c17761add512b406d554ab8b39279805a2a3c7f9ce5e9228 0 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c17761add512b406d554ab8b39279805a2a3c7f9ce5e9228 0 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c17761add512b406d554ab8b39279805a2a3c7f9ce5e9228 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:09:20.889 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yX3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yX3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.yX3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b5f5696aa378913ffeeef5e33c6a303e649b5a1718645258db1830a23dd482a2 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CW3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b5f5696aa378913ffeeef5e33c6a303e649b5a1718645258db1830a23dd482a2 3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b5f5696aa378913ffeeef5e33c6a303e649b5a1718645258db1830a23dd482a2 3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b5f5696aa378913ffeeef5e33c6a303e649b5a1718645258db1830a23dd482a2 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CW3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CW3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.CW3 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5af7cd2df47d3d2eddafa15d42831866 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iPJ 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5af7cd2df47d3d2eddafa15d42831866 1 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5af7cd2df47d3d2eddafa15d42831866 1 
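For reference, the gen_dhchap_key/format_dhchap_key calls traced above reduce to a few shell steps. The sketch below is a standalone approximation, not the test's own helper: the DHHC-1:<id>:<base64>: shape matches the secrets used later in this log, but the assumption that the base64 payload is the ASCII secret followed by its little-endian CRC-32 is made here, as is the python3 body standing in for the unshown 'python -' snippet.

#!/usr/bin/env bash
# Standalone sketch of the gen_dhchap_key steps traced above (not the test's own helper).
# ASSUMPTION: the base64 payload of a DHHC-1 secret is the ASCII secret followed by its
# CRC-32 in little-endian byte order; only the DHHC-1:NN:...: framing is confirmed by the
# secrets that appear later in this log.
digest=${1:-null}   # one of: null sha256 sha384 sha512
len=${2:-48}        # secret length in hex characters (48 -> 24 random bytes)

key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # same xxd invocation as the trace
file=$(mktemp -t "spdk.key-$digest.XXX")         # same mktemp template as the trace

python3 - "$key" "$digest" > "$file" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1], sys.argv[2]
ids = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
payload = key.encode() + struct.pack("<I", zlib.crc32(key.encode()))
print(f"DHHC-1:{ids[digest]:02}:{base64.b64encode(payload).decode()}:")
PY

chmod 0600 "$file"   # same permissions the trace sets
echo "$file"         # path consumed as keys[i] / ckeys[i]

Saved as, say, gen_dhchap_key.sh (a hypothetical name), running it as './gen_dhchap_key.sh null 48' or './gen_dhchap_key.sh sha512 64' would produce key files comparable to /tmp/spdk.key-null.yX3 and /tmp/spdk.key-sha512.CW3 above.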
00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5af7cd2df47d3d2eddafa15d42831866 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iPJ 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iPJ 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.iPJ 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cfb56de7738c773dcd7593ca4b67f423819b15ad214749c0 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qix 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cfb56de7738c773dcd7593ca4b67f423819b15ad214749c0 2 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cfb56de7738c773dcd7593ca4b67f423819b15ad214749c0 2 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cfb56de7738c773dcd7593ca4b67f423819b15ad214749c0 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qix 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qix 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Qix 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:21.149 
16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=24939eb6c6363a7d11eb4cd73e9513ccf36eae78e6a14483 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hgL 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 24939eb6c6363a7d11eb4cd73e9513ccf36eae78e6a14483 2 00:09:21.149 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 24939eb6c6363a7d11eb4cd73e9513ccf36eae78e6a14483 2 00:09:21.150 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:21.150 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:21.150 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=24939eb6c6363a7d11eb4cd73e9513ccf36eae78e6a14483 00:09:21.150 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:21.150 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:21.408 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hgL 00:09:21.408 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hgL 00:09:21.408 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.hgL 00:09:21.408 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:09:21.408 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:21.408 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8d7ee4f2378d20c8899d37f490340fdf 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hSU 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8d7ee4f2378d20c8899d37f490340fdf 1 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8d7ee4f2378d20c8899d37f490340fdf 1 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8d7ee4f2378d20c8899d37f490340fdf 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hSU 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hSU 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.hSU 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9218a32e5bebdc2e7cec715c765a54a7f0cca956088c91fdb8619188490e7e37 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.109 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9218a32e5bebdc2e7cec715c765a54a7f0cca956088c91fdb8619188490e7e37 3 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9218a32e5bebdc2e7cec715c765a54a7f0cca956088c91fdb8619188490e7e37 3 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9218a32e5bebdc2e7cec715c765a54a7f0cca956088c91fdb8619188490e7e37 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:21.409 16:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.109 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.109 00:09:21.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.109 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 68841 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 68841 ']' 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.409 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
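The entries that follow register these key files with both applications and then exercise DH-HMAC-CHAP over the sha256/null combination. Below is a condensed, hedged replay of that sequence for key0/ckey0 only: every rpc.py subcommand and flag appears verbatim in the trace, while the consolidation into one script and the assumption that the target-side rpc_cmd maps to rpc.py on the default /var/tmp/spdk.sock are made here.

#!/usr/bin/env bash
# Condensed replay of the RPC sequence traced below; paths and NQNs are the ones
# used in this run. ASSUMPTION: rpc.py without -s reaches the nvmf_tgt on the
# default /var/tmp/spdk.sock, as waitforlisten above suggests.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10

# Register the generated secrets with both the target and the host application.
$rpc              keyring_file_add_key key0  /tmp/spdk.key-null.yX3
$rpc              keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CW3
$rpc -s $hostsock keyring_file_add_key key0  /tmp/spdk.key-null.yX3
$rpc -s $hostsock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CW3

# Restrict the host to one digest/dhgroup combination, allow the host on the
# subsystem with key0/ckey0, then attach a controller that must authenticate.
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$rpc              nvmf_subsystem_add_host $subnqn $hostnqn \
                      --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                      -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn \
                      --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the session authenticated, then tear the controller down again.
$rpc              nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'
$rpc -s $hostsock bdev_nvme_detach_controller nvme0

On success, nvmf_subsystem_get_qpairs reports the qpair with auth.state "completed", which is exactly what the jq checks in the trace below assert before detaching the controller and moving on to the next digest/dhgroup/key combination.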
00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 68879 /var/tmp/host.sock 00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 68879 ']' 00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.667 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yX3 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yX3 00:09:21.925 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yX3 00:09:22.184 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.CW3 ]] 00:09:22.184 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CW3 00:09:22.184 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.184 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.184 16:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.184 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CW3 00:09:22.184 16:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CW3 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iPJ 00:09:22.796 16:13:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.iPJ 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.iPJ 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Qix ]] 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qix 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qix 00:09:22.796 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qix 00:09:23.056 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:23.056 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.hgL 00:09:23.056 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.056 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.056 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.056 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.hgL 00:09:23.056 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.hgL 00:09:23.315 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.hSU ]] 00:09:23.315 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hSU 00:09:23.315 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.315 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.315 16:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.315 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hSU 00:09:23.315 16:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hSU 00:09:23.574 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:23.574 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.109 00:09:23.574 16:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.574 16:13:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.574 16:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.574 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.109 00:09:23.574 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.109 00:09:23.832 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:09:23.832 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:09:23.832 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:23.832 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:23.832 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:23.832 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:24.090 16:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:24.348 00:09:24.348 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:24.348 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:24.348 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:24.608 { 00:09:24.608 "cntlid": 1, 00:09:24.608 "qid": 0, 00:09:24.608 "state": "enabled", 00:09:24.608 "thread": "nvmf_tgt_poll_group_000", 00:09:24.608 "listen_address": { 00:09:24.608 "trtype": "TCP", 00:09:24.608 "adrfam": "IPv4", 00:09:24.608 "traddr": "10.0.0.2", 00:09:24.608 "trsvcid": "4420" 00:09:24.608 }, 00:09:24.608 "peer_address": { 00:09:24.608 "trtype": "TCP", 00:09:24.608 "adrfam": "IPv4", 00:09:24.608 "traddr": "10.0.0.1", 00:09:24.608 "trsvcid": "36796" 00:09:24.608 }, 00:09:24.608 "auth": { 00:09:24.608 "state": "completed", 00:09:24.608 "digest": "sha256", 00:09:24.608 "dhgroup": "null" 00:09:24.608 } 00:09:24.608 } 00:09:24.608 ]' 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:24.608 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:24.867 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:24.867 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:24.867 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:24.867 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:24.867 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:25.125 16:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:09:29.315 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:29.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:29.315 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:29.315 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.315 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.315 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.315 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:29.315 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:29.315 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:29.882 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:29.882 00:09:30.141 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:30.141 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:30.141 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:30.400 { 00:09:30.400 "cntlid": 3, 00:09:30.400 "qid": 0, 00:09:30.400 "state": "enabled", 00:09:30.400 "thread": "nvmf_tgt_poll_group_000", 00:09:30.400 "listen_address": { 00:09:30.400 "trtype": "TCP", 00:09:30.400 "adrfam": "IPv4", 00:09:30.400 "traddr": "10.0.0.2", 00:09:30.400 "trsvcid": "4420" 00:09:30.400 }, 00:09:30.400 "peer_address": { 00:09:30.400 "trtype": "TCP", 00:09:30.400 "adrfam": "IPv4", 00:09:30.400 "traddr": "10.0.0.1", 00:09:30.400 "trsvcid": "36826" 00:09:30.400 }, 00:09:30.400 "auth": { 00:09:30.400 "state": "completed", 00:09:30.400 "digest": "sha256", 00:09:30.400 "dhgroup": "null" 00:09:30.400 } 
00:09:30.400 } 00:09:30.400 ]' 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:30.400 16:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:30.400 16:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:30.400 16:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:30.400 16:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:30.658 16:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:09:31.594 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:31.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:31.594 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:31.594 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.594 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.594 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.594 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:31.594 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:31.594 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:31.852 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:32.111 00:09:32.111 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:32.111 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:32.111 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:32.370 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:32.370 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:32.370 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.370 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.370 16:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.370 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:32.370 { 00:09:32.370 "cntlid": 5, 00:09:32.370 "qid": 0, 00:09:32.370 "state": "enabled", 00:09:32.370 "thread": "nvmf_tgt_poll_group_000", 00:09:32.370 "listen_address": { 00:09:32.370 "trtype": "TCP", 00:09:32.370 "adrfam": "IPv4", 00:09:32.370 "traddr": "10.0.0.2", 00:09:32.370 "trsvcid": "4420" 00:09:32.370 }, 00:09:32.370 "peer_address": { 00:09:32.370 "trtype": "TCP", 00:09:32.370 "adrfam": "IPv4", 00:09:32.370 "traddr": "10.0.0.1", 00:09:32.370 "trsvcid": "51748" 00:09:32.370 }, 00:09:32.370 "auth": { 00:09:32.370 "state": "completed", 00:09:32.370 "digest": "sha256", 00:09:32.370 "dhgroup": "null" 00:09:32.370 } 00:09:32.370 } 00:09:32.370 ]' 00:09:32.370 16:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:32.370 16:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:32.370 16:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:32.370 16:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:32.370 16:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:32.629 16:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:32.629 16:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:32.629 16:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:32.888 16:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 
0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:09:33.455 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:33.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:33.455 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:33.455 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.455 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.455 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.455 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:33.455 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:33.455 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:33.714 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:33.973 00:09:33.973 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:33.973 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:33.973 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:34.231 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:09:34.232 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:34.232 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.232 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.232 16:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.232 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:34.232 { 00:09:34.232 "cntlid": 7, 00:09:34.232 "qid": 0, 00:09:34.232 "state": "enabled", 00:09:34.232 "thread": "nvmf_tgt_poll_group_000", 00:09:34.232 "listen_address": { 00:09:34.232 "trtype": "TCP", 00:09:34.232 "adrfam": "IPv4", 00:09:34.232 "traddr": "10.0.0.2", 00:09:34.232 "trsvcid": "4420" 00:09:34.232 }, 00:09:34.232 "peer_address": { 00:09:34.232 "trtype": "TCP", 00:09:34.232 "adrfam": "IPv4", 00:09:34.232 "traddr": "10.0.0.1", 00:09:34.232 "trsvcid": "51782" 00:09:34.232 }, 00:09:34.232 "auth": { 00:09:34.232 "state": "completed", 00:09:34.232 "digest": "sha256", 00:09:34.232 "dhgroup": "null" 00:09:34.232 } 00:09:34.232 } 00:09:34.232 ]' 00:09:34.232 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:34.232 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:34.232 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:34.490 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:34.491 16:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:34.491 16:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:34.491 16:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:34.491 16:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:34.749 16:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:09:35.317 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:35.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:35.317 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:35.317 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.317 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.317 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.317 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:35.317 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:35.317 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:35.317 16:13:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:35.884 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:36.143 00:09:36.143 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:36.143 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:36.143 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:36.401 { 00:09:36.401 "cntlid": 9, 00:09:36.401 "qid": 0, 00:09:36.401 "state": "enabled", 00:09:36.401 "thread": "nvmf_tgt_poll_group_000", 00:09:36.401 "listen_address": { 00:09:36.401 "trtype": "TCP", 00:09:36.401 "adrfam": "IPv4", 00:09:36.401 "traddr": "10.0.0.2", 00:09:36.401 "trsvcid": "4420" 00:09:36.401 }, 00:09:36.401 "peer_address": { 00:09:36.401 "trtype": "TCP", 00:09:36.401 "adrfam": "IPv4", 00:09:36.401 "traddr": "10.0.0.1", 00:09:36.401 "trsvcid": "51802" 00:09:36.401 }, 00:09:36.401 "auth": { 00:09:36.401 "state": "completed", 00:09:36.401 
"digest": "sha256", 00:09:36.401 "dhgroup": "ffdhe2048" 00:09:36.401 } 00:09:36.401 } 00:09:36.401 ]' 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:36.401 16:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:36.401 16:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:36.401 16:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:36.401 16:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:36.401 16:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:36.401 16:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:36.660 16:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:09:37.597 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:37.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:37.597 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:37.597 16:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.597 16:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.597 16:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.597 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:37.597 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:37.597 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:37.856 16:13:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:37.856 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.115 00:09:38.115 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:38.115 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:38.115 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:38.375 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:38.375 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:38.375 16:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.375 16:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.375 16:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.375 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:38.375 { 00:09:38.375 "cntlid": 11, 00:09:38.375 "qid": 0, 00:09:38.375 "state": "enabled", 00:09:38.375 "thread": "nvmf_tgt_poll_group_000", 00:09:38.375 "listen_address": { 00:09:38.375 "trtype": "TCP", 00:09:38.375 "adrfam": "IPv4", 00:09:38.375 "traddr": "10.0.0.2", 00:09:38.375 "trsvcid": "4420" 00:09:38.375 }, 00:09:38.375 "peer_address": { 00:09:38.375 "trtype": "TCP", 00:09:38.375 "adrfam": "IPv4", 00:09:38.375 "traddr": "10.0.0.1", 00:09:38.375 "trsvcid": "51832" 00:09:38.375 }, 00:09:38.375 "auth": { 00:09:38.375 "state": "completed", 00:09:38.375 "digest": "sha256", 00:09:38.375 "dhgroup": "ffdhe2048" 00:09:38.375 } 00:09:38.375 } 00:09:38.375 ]' 00:09:38.375 16:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:38.375 16:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:38.375 16:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:38.375 16:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:38.375 16:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:38.634 16:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:38.634 16:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:38.634 16:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:38.893 16:13:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:09:39.460 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:39.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:39.460 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:39.460 16:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.460 16:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.460 16:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.460 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:39.460 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:39.460 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:39.718 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:09:39.718 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:39.718 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:39.718 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:39.718 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:39.718 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:39.719 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:39.719 16:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.719 16:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.719 16:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.719 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:39.719 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.285 00:09:40.285 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:40.285 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:09:40.285 16:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:40.542 { 00:09:40.542 "cntlid": 13, 00:09:40.542 "qid": 0, 00:09:40.542 "state": "enabled", 00:09:40.542 "thread": "nvmf_tgt_poll_group_000", 00:09:40.542 "listen_address": { 00:09:40.542 "trtype": "TCP", 00:09:40.542 "adrfam": "IPv4", 00:09:40.542 "traddr": "10.0.0.2", 00:09:40.542 "trsvcid": "4420" 00:09:40.542 }, 00:09:40.542 "peer_address": { 00:09:40.542 "trtype": "TCP", 00:09:40.542 "adrfam": "IPv4", 00:09:40.542 "traddr": "10.0.0.1", 00:09:40.542 "trsvcid": "51854" 00:09:40.542 }, 00:09:40.542 "auth": { 00:09:40.542 "state": "completed", 00:09:40.542 "digest": "sha256", 00:09:40.542 "dhgroup": "ffdhe2048" 00:09:40.542 } 00:09:40.542 } 00:09:40.542 ]' 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:40.542 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:40.800 16:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:09:41.366 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:41.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:41.366 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:41.366 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.366 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.624 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.624 16:13:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:41.624 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:41.624 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:41.883 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:42.159 00:09:42.159 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:42.159 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:42.159 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:42.418 { 00:09:42.418 "cntlid": 15, 00:09:42.418 "qid": 0, 00:09:42.418 "state": "enabled", 00:09:42.418 "thread": "nvmf_tgt_poll_group_000", 00:09:42.418 "listen_address": { 00:09:42.418 "trtype": "TCP", 00:09:42.418 "adrfam": "IPv4", 00:09:42.418 "traddr": "10.0.0.2", 00:09:42.418 "trsvcid": "4420" 00:09:42.418 }, 00:09:42.418 "peer_address": { 00:09:42.418 "trtype": "TCP", 
00:09:42.418 "adrfam": "IPv4", 00:09:42.418 "traddr": "10.0.0.1", 00:09:42.418 "trsvcid": "41390" 00:09:42.418 }, 00:09:42.418 "auth": { 00:09:42.418 "state": "completed", 00:09:42.418 "digest": "sha256", 00:09:42.418 "dhgroup": "ffdhe2048" 00:09:42.418 } 00:09:42.418 } 00:09:42.418 ]' 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:42.418 16:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:42.418 16:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:42.418 16:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:42.418 16:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:42.418 16:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:42.418 16:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:42.676 16:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:43.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:43.612 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.179 00:09:44.179 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:44.179 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:44.179 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:44.437 { 00:09:44.437 "cntlid": 17, 00:09:44.437 "qid": 0, 00:09:44.437 "state": "enabled", 00:09:44.437 "thread": "nvmf_tgt_poll_group_000", 00:09:44.437 "listen_address": { 00:09:44.437 "trtype": "TCP", 00:09:44.437 "adrfam": "IPv4", 00:09:44.437 "traddr": "10.0.0.2", 00:09:44.437 "trsvcid": "4420" 00:09:44.437 }, 00:09:44.437 "peer_address": { 00:09:44.437 "trtype": "TCP", 00:09:44.437 "adrfam": "IPv4", 00:09:44.437 "traddr": "10.0.0.1", 00:09:44.437 "trsvcid": "41426" 00:09:44.437 }, 00:09:44.437 "auth": { 00:09:44.437 "state": "completed", 00:09:44.437 "digest": "sha256", 00:09:44.437 "dhgroup": "ffdhe3072" 00:09:44.437 } 00:09:44.437 } 00:09:44.437 ]' 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:44.437 16:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:44.437 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:44.437 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:44.437 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.437 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.437 16:13:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:44.696 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:09:45.263 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.263 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:45.263 16:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.264 16:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.264 16:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.264 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:45.264 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:45.264 16:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.831 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.090 00:09:46.090 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:46.090 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:46.090 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:46.350 { 00:09:46.350 "cntlid": 19, 00:09:46.350 "qid": 0, 00:09:46.350 "state": "enabled", 00:09:46.350 "thread": "nvmf_tgt_poll_group_000", 00:09:46.350 "listen_address": { 00:09:46.350 "trtype": "TCP", 00:09:46.350 "adrfam": "IPv4", 00:09:46.350 "traddr": "10.0.0.2", 00:09:46.350 "trsvcid": "4420" 00:09:46.350 }, 00:09:46.350 "peer_address": { 00:09:46.350 "trtype": "TCP", 00:09:46.350 "adrfam": "IPv4", 00:09:46.350 "traddr": "10.0.0.1", 00:09:46.350 "trsvcid": "41456" 00:09:46.350 }, 00:09:46.350 "auth": { 00:09:46.350 "state": "completed", 00:09:46.350 "digest": "sha256", 00:09:46.350 "dhgroup": "ffdhe3072" 00:09:46.350 } 00:09:46.350 } 00:09:46.350 ]' 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:46.350 16:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:46.350 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:46.350 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:46.350 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.350 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.350 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.917 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:09:47.486 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:47.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:47.486 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:47.486 16:13:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.486 16:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.486 16:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.486 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:47.486 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:47.486 16:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.745 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.004 00:09:48.004 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:48.004 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:48.004 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.262 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:48.262 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:48.262 16:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.262 16:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.262 16:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.262 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:48.262 { 00:09:48.262 "cntlid": 21, 
00:09:48.262 "qid": 0, 00:09:48.262 "state": "enabled", 00:09:48.262 "thread": "nvmf_tgt_poll_group_000", 00:09:48.262 "listen_address": { 00:09:48.262 "trtype": "TCP", 00:09:48.262 "adrfam": "IPv4", 00:09:48.262 "traddr": "10.0.0.2", 00:09:48.262 "trsvcid": "4420" 00:09:48.262 }, 00:09:48.262 "peer_address": { 00:09:48.262 "trtype": "TCP", 00:09:48.263 "adrfam": "IPv4", 00:09:48.263 "traddr": "10.0.0.1", 00:09:48.263 "trsvcid": "41488" 00:09:48.263 }, 00:09:48.263 "auth": { 00:09:48.263 "state": "completed", 00:09:48.263 "digest": "sha256", 00:09:48.263 "dhgroup": "ffdhe3072" 00:09:48.263 } 00:09:48.263 } 00:09:48.263 ]' 00:09:48.263 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:48.263 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:48.263 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:48.263 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:48.263 16:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:48.521 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.521 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.521 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.779 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:09:49.358 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:49.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:49.358 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:49.358 16:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.358 16:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.358 16:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.358 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:49.358 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:49.358 16:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
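At this point the trace is starting connect_authenticate sha256 ffdhe3072 3, and every digest/dhgroup/key combination in this section runs the same exchange. As a reading aid, the per-combination command sequence is sketched below. It is reconstructed only from the rpc.py and nvme-cli invocations visible in this log: the subsystem NQN, host UUID, key slot names and the host-side socket /var/tmp/host.sock are the ones this run uses, while the target-side rpc_cmd calls are assumed to reach the target application's default RPC socket (the log hides that detail behind xtrace_disable), and $host_secret stands in for the DHHC-1 string printed in the log for this key slot. Treat it as an illustration of the flow, not the test script itself.

  # Combination being exercised at this point in the trace
  digest=sha256 dhgroup=ffdhe3072 key=key3
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Target side: authorize the host with the key slot under test. Slots key0..key2 also
  # pass --dhchap-ctrlr-key ckeyN (the controller-side key), as seen earlier in the trace;
  # slot key3 has no controller key in this run, so that option is omitted here.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$key"

  # Host side (SPDK initiator answering on /var/tmp/host.sock): restrict negotiation to
  # the digest/dhgroup under test, then attach a controller using the same key slot.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$key"

  # Confirm the attach and that the target reports a completed authentication with the
  # expected digest and dhgroup on the new queue pair.
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'

  # Detach, then repeat the handshake through the kernel initiator with the DHHC-1 secret
  # shown in the log for this slot.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret "$host_secret"
  nvme disconnect -n "$SUBNQN"

  # Tear down before the next key/dhgroup combination.
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
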
00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:49.616 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:49.875 00:09:49.875 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:49.875 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.875 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:50.133 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.133 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.133 16:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.133 16:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.133 16:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.133 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:50.133 { 00:09:50.133 "cntlid": 23, 00:09:50.133 "qid": 0, 00:09:50.133 "state": "enabled", 00:09:50.133 "thread": "nvmf_tgt_poll_group_000", 00:09:50.133 "listen_address": { 00:09:50.133 "trtype": "TCP", 00:09:50.133 "adrfam": "IPv4", 00:09:50.133 "traddr": "10.0.0.2", 00:09:50.133 "trsvcid": "4420" 00:09:50.133 }, 00:09:50.133 "peer_address": { 00:09:50.133 "trtype": "TCP", 00:09:50.133 "adrfam": "IPv4", 00:09:50.133 "traddr": "10.0.0.1", 00:09:50.133 "trsvcid": "41516" 00:09:50.133 }, 00:09:50.133 "auth": { 00:09:50.133 "state": "completed", 00:09:50.133 "digest": "sha256", 00:09:50.133 "dhgroup": "ffdhe3072" 00:09:50.133 } 00:09:50.133 } 00:09:50.133 ]' 00:09:50.133 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:50.391 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.391 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:50.391 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:50.391 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:50.391 16:13:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.391 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.391 16:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.650 16:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:09:51.217 16:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.217 16:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:51.217 16:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.217 16:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.217 16:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.218 16:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:51.218 16:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:51.218 16:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:51.218 16:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:51.476 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:09:51.476 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:51.476 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:51.476 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:09:51.476 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:51.477 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:51.477 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.477 16:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.477 16:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.477 16:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.477 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.477 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.735 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:51.995 { 00:09:51.995 "cntlid": 25, 00:09:51.995 "qid": 0, 00:09:51.995 "state": "enabled", 00:09:51.995 "thread": "nvmf_tgt_poll_group_000", 00:09:51.995 "listen_address": { 00:09:51.995 "trtype": "TCP", 00:09:51.995 "adrfam": "IPv4", 00:09:51.995 "traddr": "10.0.0.2", 00:09:51.995 "trsvcid": "4420" 00:09:51.995 }, 00:09:51.995 "peer_address": { 00:09:51.995 "trtype": "TCP", 00:09:51.995 "adrfam": "IPv4", 00:09:51.995 "traddr": "10.0.0.1", 00:09:51.995 "trsvcid": "42868" 00:09:51.995 }, 00:09:51.995 "auth": { 00:09:51.995 "state": "completed", 00:09:51.995 "digest": "sha256", 00:09:51.995 "dhgroup": "ffdhe4096" 00:09:51.995 } 00:09:51.995 } 00:09:51.995 ]' 00:09:51.995 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:52.254 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.254 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:52.254 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:52.254 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:52.254 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.254 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.254 16:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:52.512 16:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:09:53.080 16:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.080 
16:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:53.080 16:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.080 16:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.080 16:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.080 16:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:53.080 16:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:53.080 16:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.339 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.907 00:09:53.907 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:53.907 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:53.907 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:54.166 { 00:09:54.166 "cntlid": 27, 00:09:54.166 "qid": 0, 00:09:54.166 "state": "enabled", 00:09:54.166 "thread": "nvmf_tgt_poll_group_000", 00:09:54.166 "listen_address": { 00:09:54.166 "trtype": "TCP", 00:09:54.166 "adrfam": "IPv4", 00:09:54.166 "traddr": "10.0.0.2", 00:09:54.166 "trsvcid": "4420" 00:09:54.166 }, 00:09:54.166 "peer_address": { 00:09:54.166 "trtype": "TCP", 00:09:54.166 "adrfam": "IPv4", 00:09:54.166 "traddr": "10.0.0.1", 00:09:54.166 "trsvcid": "42908" 00:09:54.166 }, 00:09:54.166 "auth": { 00:09:54.166 "state": "completed", 00:09:54.166 "digest": "sha256", 00:09:54.166 "dhgroup": "ffdhe4096" 00:09:54.166 } 00:09:54.166 } 00:09:54.166 ]' 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.166 16:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.424 16:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:09:55.362 16:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.362 16:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:55.362 16:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.362 16:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.362 16:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.362 16:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:55.362 16:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:55.362 16:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:55.620 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:09:55.620 16:13:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:55.620 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:55.620 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:09:55.620 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:55.620 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.620 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.621 16:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.621 16:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.621 16:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.621 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.621 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.879 00:09:55.879 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:55.879 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:55.879 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:56.138 { 00:09:56.138 "cntlid": 29, 00:09:56.138 "qid": 0, 00:09:56.138 "state": "enabled", 00:09:56.138 "thread": "nvmf_tgt_poll_group_000", 00:09:56.138 "listen_address": { 00:09:56.138 "trtype": "TCP", 00:09:56.138 "adrfam": "IPv4", 00:09:56.138 "traddr": "10.0.0.2", 00:09:56.138 "trsvcid": "4420" 00:09:56.138 }, 00:09:56.138 "peer_address": { 00:09:56.138 "trtype": "TCP", 00:09:56.138 "adrfam": "IPv4", 00:09:56.138 "traddr": "10.0.0.1", 00:09:56.138 "trsvcid": "42930" 00:09:56.138 }, 00:09:56.138 "auth": { 00:09:56.138 "state": "completed", 00:09:56.138 "digest": "sha256", 00:09:56.138 "dhgroup": "ffdhe4096" 00:09:56.138 } 00:09:56.138 } 00:09:56.138 ]' 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.138 16:13:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:56.397 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:56.397 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:56.397 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.397 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.397 16:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.656 16:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:09:57.223 16:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.223 16:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:57.223 16:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.223 16:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.223 16:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.223 16:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:57.223 16:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:57.223 16:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:57.482 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:57.740 00:09:57.999 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:57.999 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:58.000 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:58.314 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.314 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.314 16:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.314 16:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.314 16:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.314 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:58.314 { 00:09:58.314 "cntlid": 31, 00:09:58.314 "qid": 0, 00:09:58.314 "state": "enabled", 00:09:58.314 "thread": "nvmf_tgt_poll_group_000", 00:09:58.314 "listen_address": { 00:09:58.314 "trtype": "TCP", 00:09:58.314 "adrfam": "IPv4", 00:09:58.314 "traddr": "10.0.0.2", 00:09:58.314 "trsvcid": "4420" 00:09:58.314 }, 00:09:58.314 "peer_address": { 00:09:58.314 "trtype": "TCP", 00:09:58.314 "adrfam": "IPv4", 00:09:58.314 "traddr": "10.0.0.1", 00:09:58.314 "trsvcid": "42960" 00:09:58.314 }, 00:09:58.314 "auth": { 00:09:58.314 "state": "completed", 00:09:58.315 "digest": "sha256", 00:09:58.315 "dhgroup": "ffdhe4096" 00:09:58.315 } 00:09:58.315 } 00:09:58.315 ]' 00:09:58.315 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:58.315 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:58.315 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:58.315 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:58.315 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:58.315 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.315 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.315 16:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.589 16:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.523 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:59.523 16:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.523 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.524 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.090 00:10:00.090 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:00.090 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.090 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
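The repeated blocks in this trace are individual connect_authenticate passes: the host-side bdev_nvme layer is pinned to a single digest and DH group, the host NQN is added to the subsystem with the key under test, a controller is attached so the DH-HCHAP handshake actually runs, the resulting qpair is inspected with jq, and everything is detached again before the next key. A minimal sketch of one such pass, assuming a target listening on 10.0.0.2:4420, a separate host-side SPDK app answering RPCs on /var/tmp/host.sock, and placeholder variables HOSTNQN, KEY and CKEY standing in for the uuid-based host NQN and the key names the script derives from its key arrays (rpc.py path abbreviated relative to the SPDK repo):

  # host side: only offer the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # target side: allow the host NQN and bind it to the key (and controller key, if configured)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key "$KEY" ${CKEY:+--dhchap-ctrlr-key "$CKEY"}
  # host side: attaching a controller forces the DH-HCHAP exchange
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "$KEY" ${CKEY:+--dhchap-ctrlr-key "$CKEY"}
  # target side: the qpairs JSON printed in the trace comes from this call; the script
  # checks .auth.digest and .auth.dhgroup the same way it checks .auth.state
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  # host side: tear the controller down before the next key/dhgroup combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0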
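The whole matrix is driven by the nested loops visible at target/auth.sh@91-@93 (for digest, for dhgroup, for keyid); each iteration re-pins the host options and calls connect_authenticate, which, judging by the auth.sh@52-@56 line numbers in the trace, also covers the nvme-cli connect/disconnect and the nvmf_subsystem_remove_host cleanup. A rough sketch of that outer structure, assuming digests, dhgroups and keys arrays like the ones the script builds and treating connect_authenticate as the pass sketched above:

  for digest in "${digests[@]}"; do        # sha256, sha384, ... (target/auth.sh@91)
    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe4096, ffdhe6144, ffdhe8192, ... (target/auth.sh@92)
      for keyid in "${!keys[@]}"; do       # indexes 0..3 for key0..key3 (target/auth.sh@93)
        # restrict the host to one digest/dhgroup so the negotiated values are deterministic
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # attach/verify/detach over RPC, then the kernel-initiator leg and host cleanup
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done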
00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:00.348 { 00:10:00.348 "cntlid": 33, 00:10:00.348 "qid": 0, 00:10:00.348 "state": "enabled", 00:10:00.348 "thread": "nvmf_tgt_poll_group_000", 00:10:00.348 "listen_address": { 00:10:00.348 "trtype": "TCP", 00:10:00.348 "adrfam": "IPv4", 00:10:00.348 "traddr": "10.0.0.2", 00:10:00.348 "trsvcid": "4420" 00:10:00.348 }, 00:10:00.348 "peer_address": { 00:10:00.348 "trtype": "TCP", 00:10:00.348 "adrfam": "IPv4", 00:10:00.348 "traddr": "10.0.0.1", 00:10:00.348 "trsvcid": "42990" 00:10:00.348 }, 00:10:00.348 "auth": { 00:10:00.348 "state": "completed", 00:10:00.348 "digest": "sha256", 00:10:00.348 "dhgroup": "ffdhe6144" 00:10:00.348 } 00:10:00.348 } 00:10:00.348 ]' 00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.348 16:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:00.348 16:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:00.348 16:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:00.607 16:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.607 16:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.607 16:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.865 16:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:10:01.429 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.429 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:01.429 16:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.429 16:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.429 16:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.429 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:01.429 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:01.429 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.687 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.253 00:10:02.253 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:02.253 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:02.253 16:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:02.511 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.511 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.511 16:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.511 16:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.512 16:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.512 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:02.512 { 00:10:02.512 "cntlid": 35, 00:10:02.512 "qid": 0, 00:10:02.512 "state": "enabled", 00:10:02.512 "thread": "nvmf_tgt_poll_group_000", 00:10:02.512 "listen_address": { 00:10:02.512 "trtype": "TCP", 00:10:02.512 "adrfam": "IPv4", 00:10:02.512 "traddr": "10.0.0.2", 00:10:02.512 "trsvcid": "4420" 00:10:02.512 }, 00:10:02.512 "peer_address": { 00:10:02.512 "trtype": "TCP", 00:10:02.512 "adrfam": "IPv4", 00:10:02.512 "traddr": "10.0.0.1", 00:10:02.512 "trsvcid": "32960" 00:10:02.512 }, 00:10:02.512 "auth": { 00:10:02.512 "state": "completed", 00:10:02.512 "digest": "sha256", 00:10:02.512 "dhgroup": "ffdhe6144" 00:10:02.512 } 00:10:02.512 } 00:10:02.512 ]' 00:10:02.512 16:13:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:02.512 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.512 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:02.512 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:02.512 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:02.771 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.771 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.771 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.030 16:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:10:03.598 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.598 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:03.598 16:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.598 16:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.598 16:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.598 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:03.598 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:03.598 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.857 
16:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.857 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.423 00:10:04.423 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:04.423 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:04.423 16:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:04.682 { 00:10:04.682 "cntlid": 37, 00:10:04.682 "qid": 0, 00:10:04.682 "state": "enabled", 00:10:04.682 "thread": "nvmf_tgt_poll_group_000", 00:10:04.682 "listen_address": { 00:10:04.682 "trtype": "TCP", 00:10:04.682 "adrfam": "IPv4", 00:10:04.682 "traddr": "10.0.0.2", 00:10:04.682 "trsvcid": "4420" 00:10:04.682 }, 00:10:04.682 "peer_address": { 00:10:04.682 "trtype": "TCP", 00:10:04.682 "adrfam": "IPv4", 00:10:04.682 "traddr": "10.0.0.1", 00:10:04.682 "trsvcid": "32990" 00:10:04.682 }, 00:10:04.682 "auth": { 00:10:04.682 "state": "completed", 00:10:04.682 "digest": "sha256", 00:10:04.682 "dhgroup": "ffdhe6144" 00:10:04.682 } 00:10:04.682 } 00:10:04.682 ]' 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.682 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.248 16:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 
0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:10:05.814 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.814 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:05.814 16:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.814 16:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.814 16:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.814 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:05.815 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:05.815 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:06.073 16:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:06.641 00:10:06.641 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:06.641 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:06.641 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:06.898 { 00:10:06.898 "cntlid": 39, 00:10:06.898 "qid": 0, 00:10:06.898 "state": "enabled", 00:10:06.898 "thread": "nvmf_tgt_poll_group_000", 00:10:06.898 "listen_address": { 00:10:06.898 "trtype": "TCP", 00:10:06.898 "adrfam": "IPv4", 00:10:06.898 "traddr": "10.0.0.2", 00:10:06.898 "trsvcid": "4420" 00:10:06.898 }, 00:10:06.898 "peer_address": { 00:10:06.898 "trtype": "TCP", 00:10:06.898 "adrfam": "IPv4", 00:10:06.898 "traddr": "10.0.0.1", 00:10:06.898 "trsvcid": "33008" 00:10:06.898 }, 00:10:06.898 "auth": { 00:10:06.898 "state": "completed", 00:10:06.898 "digest": "sha256", 00:10:06.898 "dhgroup": "ffdhe6144" 00:10:06.898 } 00:10:06.898 } 00:10:06.898 ]' 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.898 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.156 16:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:10:08.091 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.349 16:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.916 00:10:08.916 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:08.916 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.916 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:09.174 { 00:10:09.174 "cntlid": 41, 00:10:09.174 "qid": 0, 00:10:09.174 "state": "enabled", 00:10:09.174 "thread": "nvmf_tgt_poll_group_000", 00:10:09.174 "listen_address": { 00:10:09.174 "trtype": "TCP", 00:10:09.174 "adrfam": "IPv4", 00:10:09.174 "traddr": "10.0.0.2", 00:10:09.174 "trsvcid": "4420" 00:10:09.174 }, 00:10:09.174 "peer_address": { 00:10:09.174 "trtype": "TCP", 00:10:09.174 "adrfam": "IPv4", 00:10:09.174 "traddr": "10.0.0.1", 00:10:09.174 "trsvcid": "33036" 00:10:09.174 }, 00:10:09.174 "auth": { 00:10:09.174 
"state": "completed", 00:10:09.174 "digest": "sha256", 00:10:09.174 "dhgroup": "ffdhe8192" 00:10:09.174 } 00:10:09.174 } 00:10:09.174 ]' 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.174 16:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.739 16:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:10:10.305 16:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.305 16:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:10.305 16:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.305 16:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.305 16:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.305 16:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:10.305 16:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:10.305 16:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.563 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.129 00:10:11.129 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:11.129 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.129 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:11.387 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.387 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.387 16:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.387 16:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.387 16:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.387 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:11.387 { 00:10:11.387 "cntlid": 43, 00:10:11.387 "qid": 0, 00:10:11.387 "state": "enabled", 00:10:11.387 "thread": "nvmf_tgt_poll_group_000", 00:10:11.387 "listen_address": { 00:10:11.387 "trtype": "TCP", 00:10:11.387 "adrfam": "IPv4", 00:10:11.387 "traddr": "10.0.0.2", 00:10:11.387 "trsvcid": "4420" 00:10:11.387 }, 00:10:11.387 "peer_address": { 00:10:11.387 "trtype": "TCP", 00:10:11.387 "adrfam": "IPv4", 00:10:11.387 "traddr": "10.0.0.1", 00:10:11.387 "trsvcid": "37534" 00:10:11.387 }, 00:10:11.387 "auth": { 00:10:11.387 "state": "completed", 00:10:11.387 "digest": "sha256", 00:10:11.387 "dhgroup": "ffdhe8192" 00:10:11.387 } 00:10:11.387 } 00:10:11.387 ]' 00:10:11.387 16:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:11.387 16:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:11.387 16:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:11.387 16:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:11.387 16:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:11.646 16:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.646 16:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.646 16:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.904 16:13:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:10:12.471 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.471 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:12.471 16:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.471 16:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.471 16:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.471 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:12.471 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:12.471 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.730 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.295 00:10:13.295 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:13.295 16:13:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:10:13.295 16:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:13.553 { 00:10:13.553 "cntlid": 45, 00:10:13.553 "qid": 0, 00:10:13.553 "state": "enabled", 00:10:13.553 "thread": "nvmf_tgt_poll_group_000", 00:10:13.553 "listen_address": { 00:10:13.553 "trtype": "TCP", 00:10:13.553 "adrfam": "IPv4", 00:10:13.553 "traddr": "10.0.0.2", 00:10:13.553 "trsvcid": "4420" 00:10:13.553 }, 00:10:13.553 "peer_address": { 00:10:13.553 "trtype": "TCP", 00:10:13.553 "adrfam": "IPv4", 00:10:13.553 "traddr": "10.0.0.1", 00:10:13.553 "trsvcid": "37574" 00:10:13.553 }, 00:10:13.553 "auth": { 00:10:13.553 "state": "completed", 00:10:13.553 "digest": "sha256", 00:10:13.553 "dhgroup": "ffdhe8192" 00:10:13.553 } 00:10:13.553 } 00:10:13.553 ]' 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:13.553 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:13.811 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.811 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.811 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.069 16:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:10:14.636 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.636 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:14.636 16:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.636 16:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.636 16:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.636 16:13:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:14.636 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:14.636 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:14.895 16:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:15.462 00:10:15.462 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:15.462 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:15.462 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:15.740 { 00:10:15.740 "cntlid": 47, 00:10:15.740 "qid": 0, 00:10:15.740 "state": "enabled", 00:10:15.740 "thread": "nvmf_tgt_poll_group_000", 00:10:15.740 "listen_address": { 00:10:15.740 "trtype": "TCP", 00:10:15.740 "adrfam": "IPv4", 00:10:15.740 "traddr": "10.0.0.2", 00:10:15.740 "trsvcid": "4420" 00:10:15.740 }, 00:10:15.740 "peer_address": { 00:10:15.740 "trtype": "TCP", 
00:10:15.740 "adrfam": "IPv4", 00:10:15.740 "traddr": "10.0.0.1", 00:10:15.740 "trsvcid": "37604" 00:10:15.740 }, 00:10:15.740 "auth": { 00:10:15.740 "state": "completed", 00:10:15.740 "digest": "sha256", 00:10:15.740 "dhgroup": "ffdhe8192" 00:10:15.740 } 00:10:15.740 } 00:10:15.740 ]' 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:15.740 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:16.029 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.029 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.029 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.029 16:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:16.598 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.857 
16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.857 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.858 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.116 00:10:17.116 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:17.116 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:17.116 16:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.374 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.375 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.375 16:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.375 16:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.375 16:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.375 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:17.375 { 00:10:17.375 "cntlid": 49, 00:10:17.375 "qid": 0, 00:10:17.375 "state": "enabled", 00:10:17.375 "thread": "nvmf_tgt_poll_group_000", 00:10:17.375 "listen_address": { 00:10:17.375 "trtype": "TCP", 00:10:17.375 "adrfam": "IPv4", 00:10:17.375 "traddr": "10.0.0.2", 00:10:17.375 "trsvcid": "4420" 00:10:17.375 }, 00:10:17.375 "peer_address": { 00:10:17.375 "trtype": "TCP", 00:10:17.375 "adrfam": "IPv4", 00:10:17.375 "traddr": "10.0.0.1", 00:10:17.375 "trsvcid": "37624" 00:10:17.375 }, 00:10:17.375 "auth": { 00:10:17.375 "state": "completed", 00:10:17.375 "digest": "sha384", 00:10:17.375 "dhgroup": "null" 00:10:17.375 } 00:10:17.375 } 00:10:17.375 ]' 00:10:17.375 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:17.633 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:17.633 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:17.633 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:17.633 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:17.633 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.633 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:10:17.633 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.891 16:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:10:18.458 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.458 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:18.458 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.458 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.458 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.458 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:18.458 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:18.458 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.718 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.977 00:10:18.977 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:18.977 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.977 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:19.236 { 00:10:19.236 "cntlid": 51, 00:10:19.236 "qid": 0, 00:10:19.236 "state": "enabled", 00:10:19.236 "thread": "nvmf_tgt_poll_group_000", 00:10:19.236 "listen_address": { 00:10:19.236 "trtype": "TCP", 00:10:19.236 "adrfam": "IPv4", 00:10:19.236 "traddr": "10.0.0.2", 00:10:19.236 "trsvcid": "4420" 00:10:19.236 }, 00:10:19.236 "peer_address": { 00:10:19.236 "trtype": "TCP", 00:10:19.236 "adrfam": "IPv4", 00:10:19.236 "traddr": "10.0.0.1", 00:10:19.236 "trsvcid": "37664" 00:10:19.236 }, 00:10:19.236 "auth": { 00:10:19.236 "state": "completed", 00:10:19.236 "digest": "sha384", 00:10:19.236 "dhgroup": "null" 00:10:19.236 } 00:10:19.236 } 00:10:19.236 ]' 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:19.236 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:19.495 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.495 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.495 16:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.495 16:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:10:20.060 16:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.319 16:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:20.319 16:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:20.319 16:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.319 16:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.319 16:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:20.319 16:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:20.319 16:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:20.578 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:10:20.578 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:20.578 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:20.578 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:20.578 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:20.578 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.579 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.579 16:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.579 16:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.579 16:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.579 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.579 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.860 00:10:20.860 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:20.860 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:20.860 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.119 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.119 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.119 16:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.119 16:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.119 16:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.119 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:21.119 { 00:10:21.119 "cntlid": 53, 00:10:21.119 "qid": 0, 00:10:21.119 "state": "enabled", 
00:10:21.119 "thread": "nvmf_tgt_poll_group_000", 00:10:21.119 "listen_address": { 00:10:21.119 "trtype": "TCP", 00:10:21.119 "adrfam": "IPv4", 00:10:21.119 "traddr": "10.0.0.2", 00:10:21.119 "trsvcid": "4420" 00:10:21.119 }, 00:10:21.119 "peer_address": { 00:10:21.120 "trtype": "TCP", 00:10:21.120 "adrfam": "IPv4", 00:10:21.120 "traddr": "10.0.0.1", 00:10:21.120 "trsvcid": "59728" 00:10:21.120 }, 00:10:21.120 "auth": { 00:10:21.120 "state": "completed", 00:10:21.120 "digest": "sha384", 00:10:21.120 "dhgroup": "null" 00:10:21.120 } 00:10:21.120 } 00:10:21.120 ]' 00:10:21.120 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:21.120 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:21.120 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:21.120 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:21.120 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:21.378 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.379 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.379 16:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.637 16:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:10:22.205 16:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.205 16:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:22.205 16:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.205 16:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.205 16:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.205 16:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:22.205 16:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:22.205 16:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:22.464 
16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:22.464 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:22.723 00:10:22.723 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:22.723 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.723 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.982 { 00:10:22.982 "cntlid": 55, 00:10:22.982 "qid": 0, 00:10:22.982 "state": "enabled", 00:10:22.982 "thread": "nvmf_tgt_poll_group_000", 00:10:22.982 "listen_address": { 00:10:22.982 "trtype": "TCP", 00:10:22.982 "adrfam": "IPv4", 00:10:22.982 "traddr": "10.0.0.2", 00:10:22.982 "trsvcid": "4420" 00:10:22.982 }, 00:10:22.982 "peer_address": { 00:10:22.982 "trtype": "TCP", 00:10:22.982 "adrfam": "IPv4", 00:10:22.982 "traddr": "10.0.0.1", 00:10:22.982 "trsvcid": "59748" 00:10:22.982 }, 00:10:22.982 "auth": { 00:10:22.982 "state": "completed", 00:10:22.982 "digest": "sha384", 00:10:22.982 "dhgroup": "null" 00:10:22.982 } 00:10:22.982 } 00:10:22.982 ]' 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:22.982 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:23.241 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.241 16:14:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.241 16:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.500 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:24.067 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.326 16:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.585 00:10:24.585 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:24.585 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.585 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.844 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.844 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.844 16:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.844 16:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.844 16:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.844 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.844 { 00:10:24.844 "cntlid": 57, 00:10:24.844 "qid": 0, 00:10:24.844 "state": "enabled", 00:10:24.844 "thread": "nvmf_tgt_poll_group_000", 00:10:24.844 "listen_address": { 00:10:24.844 "trtype": "TCP", 00:10:24.844 "adrfam": "IPv4", 00:10:24.845 "traddr": "10.0.0.2", 00:10:24.845 "trsvcid": "4420" 00:10:24.845 }, 00:10:24.845 "peer_address": { 00:10:24.845 "trtype": "TCP", 00:10:24.845 "adrfam": "IPv4", 00:10:24.845 "traddr": "10.0.0.1", 00:10:24.845 "trsvcid": "59780" 00:10:24.845 }, 00:10:24.845 "auth": { 00:10:24.845 "state": "completed", 00:10:24.845 "digest": "sha384", 00:10:24.845 "dhgroup": "ffdhe2048" 00:10:24.845 } 00:10:24.845 } 00:10:24.845 ]' 00:10:24.845 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:24.845 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:24.845 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:24.845 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:24.845 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:24.845 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.845 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.845 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.412 16:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:10:25.980 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.980 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:25.980 16:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.980 16:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.980 16:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.980 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.980 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:25.980 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:26.238 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:10:26.238 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:26.238 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.239 16:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.498 00:10:26.498 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:26.498 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:26.498 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.757 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.757 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.757 16:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.757 16:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.757 16:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.757 
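Note how keyid 3 differs from the other iterations in the trace: its controller key is empty, so the ${ckeys[...]:+...} expansion drops --dhchap-ctrlr-key entirely and key3 exercises host-only (unidirectional) authentication; the matching nvme connect later passes only --dhchap-secret. A paraphrased sketch of that pattern, assuming the ckeys layout shown in the comment (the key names and NQNs are the test's, the target RPC socket is assumed to be the default):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10
ckeys=(ckey0 ckey1 ckey2 "")   # assumed layout: no controller key registered for index 3

keyid=3
ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "${ckeys[keyid]}"})   # empty array when ckey is unset
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey_arg[@]}"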
16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:26.757 { 00:10:26.757 "cntlid": 59, 00:10:26.757 "qid": 0, 00:10:26.757 "state": "enabled", 00:10:26.757 "thread": "nvmf_tgt_poll_group_000", 00:10:26.757 "listen_address": { 00:10:26.757 "trtype": "TCP", 00:10:26.757 "adrfam": "IPv4", 00:10:26.757 "traddr": "10.0.0.2", 00:10:26.757 "trsvcid": "4420" 00:10:26.757 }, 00:10:26.757 "peer_address": { 00:10:26.757 "trtype": "TCP", 00:10:26.757 "adrfam": "IPv4", 00:10:26.757 "traddr": "10.0.0.1", 00:10:26.757 "trsvcid": "59818" 00:10:26.757 }, 00:10:26.757 "auth": { 00:10:26.757 "state": "completed", 00:10:26.757 "digest": "sha384", 00:10:26.757 "dhgroup": "ffdhe2048" 00:10:26.757 } 00:10:26.757 } 00:10:26.757 ]' 00:10:26.757 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:26.757 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:26.757 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:27.018 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:27.018 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:27.018 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.018 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.018 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.275 16:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:10:27.842 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.842 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:27.842 16:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.842 16:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.842 16:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.842 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:27.843 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:27.843 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.102 16:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.361 00:10:28.361 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:28.361 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:28.361 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.620 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.620 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.620 16:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.620 16:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.620 16:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.621 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.621 { 00:10:28.621 "cntlid": 61, 00:10:28.621 "qid": 0, 00:10:28.621 "state": "enabled", 00:10:28.621 "thread": "nvmf_tgt_poll_group_000", 00:10:28.621 "listen_address": { 00:10:28.621 "trtype": "TCP", 00:10:28.621 "adrfam": "IPv4", 00:10:28.621 "traddr": "10.0.0.2", 00:10:28.621 "trsvcid": "4420" 00:10:28.621 }, 00:10:28.621 "peer_address": { 00:10:28.621 "trtype": "TCP", 00:10:28.621 "adrfam": "IPv4", 00:10:28.621 "traddr": "10.0.0.1", 00:10:28.621 "trsvcid": "59844" 00:10:28.621 }, 00:10:28.621 "auth": { 00:10:28.621 "state": "completed", 00:10:28.621 "digest": "sha384", 00:10:28.621 "dhgroup": "ffdhe2048" 00:10:28.621 } 00:10:28.621 } 00:10:28.621 ]' 00:10:28.621 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:28.880 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:28.880 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:28.880 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:28.880 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:28.880 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.880 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.880 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.139 16:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:10:29.705 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.706 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:29.706 16:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.706 16:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.964 16:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.964 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.964 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:29.964 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.222 16:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:30.222 16:14:13 
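Before every attach in this block the host-side bdev_nvme options are re-applied (target/auth.sh@94), restricting the initiator to sha384 and a single DH group, so a successful attach proves the target negotiated exactly that group. A condensed sketch of one ffdhe2048 pass, reusing the sockets, NQNs and key names from the log; the target RPC socket is assumed to be the default:

hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10

hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e '.[0].auth.dhgroup == "ffdhe2048"'
hostrpc bdev_nvme_detach_controller nvme0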
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:30.481 00:10:30.481 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.481 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.481 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.740 { 00:10:30.740 "cntlid": 63, 00:10:30.740 "qid": 0, 00:10:30.740 "state": "enabled", 00:10:30.740 "thread": "nvmf_tgt_poll_group_000", 00:10:30.740 "listen_address": { 00:10:30.740 "trtype": "TCP", 00:10:30.740 "adrfam": "IPv4", 00:10:30.740 "traddr": "10.0.0.2", 00:10:30.740 "trsvcid": "4420" 00:10:30.740 }, 00:10:30.740 "peer_address": { 00:10:30.740 "trtype": "TCP", 00:10:30.740 "adrfam": "IPv4", 00:10:30.740 "traddr": "10.0.0.1", 00:10:30.740 "trsvcid": "46246" 00:10:30.740 }, 00:10:30.740 "auth": { 00:10:30.740 "state": "completed", 00:10:30.740 "digest": "sha384", 00:10:30.740 "dhgroup": "ffdhe2048" 00:10:30.740 } 00:10:30.740 } 00:10:30.740 ]' 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:30.740 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.999 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.999 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.999 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.999 16:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.934 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.501 00:10:32.501 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:32.501 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.501 16:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.760 16:14:16 
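Each cycle also round-trips through the kernel initiator, as in the nvme connect/disconnect lines above: the same DH-HMAC-CHAP material is handed to nvme-cli as DHHC-1 secrets. A sketch with the secrets elided; in the run they are the generated secrets for the current keyid, and keyid 3 omits --dhchap-ctrl-secret:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 \
    --dhchap-secret "DHHC-1:01:<host secret, elided>" \
    --dhchap-ctrl-secret "DHHC-1:02:<controller secret, elided>"   # dropped for keyid 3
nvme disconnect -n "$subnqn"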
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.760 { 00:10:32.760 "cntlid": 65, 00:10:32.760 "qid": 0, 00:10:32.760 "state": "enabled", 00:10:32.760 "thread": "nvmf_tgt_poll_group_000", 00:10:32.760 "listen_address": { 00:10:32.760 "trtype": "TCP", 00:10:32.760 "adrfam": "IPv4", 00:10:32.760 "traddr": "10.0.0.2", 00:10:32.760 "trsvcid": "4420" 00:10:32.760 }, 00:10:32.760 "peer_address": { 00:10:32.760 "trtype": "TCP", 00:10:32.760 "adrfam": "IPv4", 00:10:32.760 "traddr": "10.0.0.1", 00:10:32.760 "trsvcid": "46260" 00:10:32.760 }, 00:10:32.760 "auth": { 00:10:32.760 "state": "completed", 00:10:32.760 "digest": "sha384", 00:10:32.760 "dhgroup": "ffdhe3072" 00:10:32.760 } 00:10:32.760 } 00:10:32.760 ]' 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.760 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.018 16:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:10:33.585 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.585 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:33.585 16:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.585 16:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.585 16:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.585 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.585 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:33.585 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 1 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.844 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.411 00:10:34.411 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:34.411 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.411 16:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.411 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.411 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.411 16:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.411 16:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.411 16:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.411 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:34.411 { 00:10:34.411 "cntlid": 67, 00:10:34.411 "qid": 0, 00:10:34.411 "state": "enabled", 00:10:34.411 "thread": "nvmf_tgt_poll_group_000", 00:10:34.411 "listen_address": { 00:10:34.411 "trtype": "TCP", 00:10:34.411 "adrfam": "IPv4", 00:10:34.411 "traddr": "10.0.0.2", 00:10:34.411 "trsvcid": "4420" 00:10:34.411 }, 00:10:34.411 "peer_address": { 00:10:34.411 "trtype": "TCP", 00:10:34.411 "adrfam": "IPv4", 00:10:34.411 "traddr": "10.0.0.1", 00:10:34.411 "trsvcid": "46286" 00:10:34.411 }, 00:10:34.411 "auth": { 00:10:34.411 "state": "completed", 00:10:34.411 "digest": "sha384", 00:10:34.411 "dhgroup": "ffdhe3072" 00:10:34.411 } 00:10:34.411 } 00:10:34.411 ]' 00:10:34.411 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:34.681 16:14:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:34.681 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:34.681 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:34.681 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:34.681 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.681 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.681 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.954 16:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:10:35.521 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.521 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:35.521 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.521 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.521 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.521 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:35.521 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:35.521 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.779 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.780 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.780 16:14:19 
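On the target side every keyid is bracketed the same way in this trace: the host NQN is added to the subsystem with the key pair under test, the connect and qpair checks run, and the host is removed again so the next iteration starts from a clean host list. Sketch, with the target RPC socket assumed to be the default:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10

"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# ... attach, nvme connect, qpair auth checks as in the trace above ...
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"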
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.780 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.038 00:10:36.038 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.038 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.038 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.296 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.297 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.297 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.297 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.297 16:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.297 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.297 { 00:10:36.297 "cntlid": 69, 00:10:36.297 "qid": 0, 00:10:36.297 "state": "enabled", 00:10:36.297 "thread": "nvmf_tgt_poll_group_000", 00:10:36.297 "listen_address": { 00:10:36.297 "trtype": "TCP", 00:10:36.297 "adrfam": "IPv4", 00:10:36.297 "traddr": "10.0.0.2", 00:10:36.297 "trsvcid": "4420" 00:10:36.297 }, 00:10:36.297 "peer_address": { 00:10:36.297 "trtype": "TCP", 00:10:36.297 "adrfam": "IPv4", 00:10:36.297 "traddr": "10.0.0.1", 00:10:36.297 "trsvcid": "46310" 00:10:36.297 }, 00:10:36.297 "auth": { 00:10:36.297 "state": "completed", 00:10:36.297 "digest": "sha384", 00:10:36.297 "dhgroup": "ffdhe3072" 00:10:36.297 } 00:10:36.297 } 00:10:36.297 ]' 00:10:36.297 16:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.555 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:36.555 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:36.555 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:36.555 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:36.555 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.555 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.555 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.814 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret 
DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:10:37.381 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.381 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:37.381 16:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.381 16:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.381 16:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.381 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.381 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:37.381 16:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.639 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.898 00:10:37.898 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:37.898 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:37.898 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.156 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.156 16:14:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.156 16:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.156 16:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.156 16:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.156 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.156 { 00:10:38.156 "cntlid": 71, 00:10:38.156 "qid": 0, 00:10:38.156 "state": "enabled", 00:10:38.156 "thread": "nvmf_tgt_poll_group_000", 00:10:38.156 "listen_address": { 00:10:38.156 "trtype": "TCP", 00:10:38.156 "adrfam": "IPv4", 00:10:38.156 "traddr": "10.0.0.2", 00:10:38.156 "trsvcid": "4420" 00:10:38.156 }, 00:10:38.156 "peer_address": { 00:10:38.156 "trtype": "TCP", 00:10:38.156 "adrfam": "IPv4", 00:10:38.156 "traddr": "10.0.0.1", 00:10:38.156 "trsvcid": "46350" 00:10:38.156 }, 00:10:38.156 "auth": { 00:10:38.156 "state": "completed", 00:10:38.156 "digest": "sha384", 00:10:38.156 "dhgroup": "ffdhe3072" 00:10:38.156 } 00:10:38.156 } 00:10:38.156 ]' 00:10:38.156 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.156 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:38.156 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:38.414 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:38.414 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.414 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.414 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.414 16:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.673 16:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:39.239 16:14:22 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.497 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.755 00:10:39.755 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.755 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.755 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.012 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.012 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.012 16:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.012 16:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.012 16:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.012 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.012 { 00:10:40.012 "cntlid": 73, 00:10:40.012 "qid": 0, 00:10:40.012 "state": "enabled", 00:10:40.012 "thread": "nvmf_tgt_poll_group_000", 00:10:40.012 "listen_address": { 00:10:40.012 "trtype": "TCP", 00:10:40.012 "adrfam": "IPv4", 00:10:40.012 "traddr": "10.0.0.2", 00:10:40.012 "trsvcid": "4420" 00:10:40.012 }, 00:10:40.012 "peer_address": { 00:10:40.012 "trtype": "TCP", 00:10:40.012 "adrfam": "IPv4", 00:10:40.012 "traddr": "10.0.0.1", 00:10:40.012 "trsvcid": "46376" 00:10:40.012 }, 00:10:40.012 "auth": { 00:10:40.012 "state": "completed", 00:10:40.012 "digest": "sha384", 
00:10:40.013 "dhgroup": "ffdhe4096" 00:10:40.013 } 00:10:40.013 } 00:10:40.013 ]' 00:10:40.013 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.013 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:40.013 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.270 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:40.270 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.270 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.270 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.270 16:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.528 16:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:10:41.094 16:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.352 16:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:41.352 16:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.352 16:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.352 16:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.352 16:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:41.352 16:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:41.352 16:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.611 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.869 00:10:41.869 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.869 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.869 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.127 { 00:10:42.127 "cntlid": 75, 00:10:42.127 "qid": 0, 00:10:42.127 "state": "enabled", 00:10:42.127 "thread": "nvmf_tgt_poll_group_000", 00:10:42.127 "listen_address": { 00:10:42.127 "trtype": "TCP", 00:10:42.127 "adrfam": "IPv4", 00:10:42.127 "traddr": "10.0.0.2", 00:10:42.127 "trsvcid": "4420" 00:10:42.127 }, 00:10:42.127 "peer_address": { 00:10:42.127 "trtype": "TCP", 00:10:42.127 "adrfam": "IPv4", 00:10:42.127 "traddr": "10.0.0.1", 00:10:42.127 "trsvcid": "46662" 00:10:42.127 }, 00:10:42.127 "auth": { 00:10:42.127 "state": "completed", 00:10:42.127 "digest": "sha384", 00:10:42.127 "dhgroup": "ffdhe4096" 00:10:42.127 } 00:10:42.127 } 00:10:42.127 ]' 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.127 16:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.694 16:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:10:43.261 16:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.261 16:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:43.261 16:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.261 16:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.261 16:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.261 16:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.261 16:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:43.261 16:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.519 16:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.520 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.520 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.778 00:10:43.778 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:43.778 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:43.778 
16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.037 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.037 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.037 16:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.037 16:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.037 16:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.037 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.037 { 00:10:44.037 "cntlid": 77, 00:10:44.037 "qid": 0, 00:10:44.037 "state": "enabled", 00:10:44.037 "thread": "nvmf_tgt_poll_group_000", 00:10:44.037 "listen_address": { 00:10:44.037 "trtype": "TCP", 00:10:44.037 "adrfam": "IPv4", 00:10:44.037 "traddr": "10.0.0.2", 00:10:44.037 "trsvcid": "4420" 00:10:44.037 }, 00:10:44.037 "peer_address": { 00:10:44.037 "trtype": "TCP", 00:10:44.037 "adrfam": "IPv4", 00:10:44.037 "traddr": "10.0.0.1", 00:10:44.037 "trsvcid": "46682" 00:10:44.037 }, 00:10:44.037 "auth": { 00:10:44.037 "state": "completed", 00:10:44.037 "digest": "sha384", 00:10:44.037 "dhgroup": "ffdhe4096" 00:10:44.037 } 00:10:44.037 } 00:10:44.037 ]' 00:10:44.037 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.295 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:44.295 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:44.295 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:44.295 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:44.295 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.295 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.295 16:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.554 16:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:10:45.489 16:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.489 16:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:45.489 16:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.489 16:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.489 16:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.489 16:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:10:45.489 16:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:45.489 16:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:45.489 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:46.055 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.055 { 00:10:46.055 "cntlid": 79, 00:10:46.055 "qid": 0, 00:10:46.055 "state": "enabled", 00:10:46.055 "thread": "nvmf_tgt_poll_group_000", 00:10:46.055 "listen_address": { 00:10:46.055 "trtype": "TCP", 00:10:46.055 "adrfam": "IPv4", 00:10:46.055 "traddr": "10.0.0.2", 00:10:46.055 "trsvcid": "4420" 00:10:46.055 }, 00:10:46.055 "peer_address": { 00:10:46.055 "trtype": "TCP", 00:10:46.055 "adrfam": "IPv4", 00:10:46.055 "traddr": 
"10.0.0.1", 00:10:46.055 "trsvcid": "46716" 00:10:46.055 }, 00:10:46.055 "auth": { 00:10:46.055 "state": "completed", 00:10:46.055 "digest": "sha384", 00:10:46.055 "dhgroup": "ffdhe4096" 00:10:46.055 } 00:10:46.055 } 00:10:46.055 ]' 00:10:46.055 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.313 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.313 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.313 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:46.313 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.313 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.313 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.313 16:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.572 16:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:10:47.138 16:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.138 16:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:47.138 16:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.138 16:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.139 16:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.139 16:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:47.139 16:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.139 16:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:47.139 16:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.397 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.398 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.965 00:10:47.965 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.965 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.965 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.223 { 00:10:48.223 "cntlid": 81, 00:10:48.223 "qid": 0, 00:10:48.223 "state": "enabled", 00:10:48.223 "thread": "nvmf_tgt_poll_group_000", 00:10:48.223 "listen_address": { 00:10:48.223 "trtype": "TCP", 00:10:48.223 "adrfam": "IPv4", 00:10:48.223 "traddr": "10.0.0.2", 00:10:48.223 "trsvcid": "4420" 00:10:48.223 }, 00:10:48.223 "peer_address": { 00:10:48.223 "trtype": "TCP", 00:10:48.223 "adrfam": "IPv4", 00:10:48.223 "traddr": "10.0.0.1", 00:10:48.223 "trsvcid": "46744" 00:10:48.223 }, 00:10:48.223 "auth": { 00:10:48.223 "state": "completed", 00:10:48.223 "digest": "sha384", 00:10:48.223 "dhgroup": "ffdhe6144" 00:10:48.223 } 00:10:48.223 } 00:10:48.223 ]' 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.223 16:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.482 16:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:10:49.417 16:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.417 16:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:49.417 16:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.417 16:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.417 16:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.417 16:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.417 16:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:49.417 16:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.417 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
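For readers following the loop, the entries above and below repeat the same host/target choreography for every (digest, dhgroup, key index) combination. The sketch below is not the auth.sh source, only a condensed restatement of one iteration built from the commands visible in this log; the host RPC socket, addresses and NQNs are copied from it, the target-side RPC socket is assumed to be the default, and HOST_SECRET/CTRL_SECRET are placeholders standing in for the literal DHHC-1 strings the log passes to nvme-cli.

    DIGEST=sha384; DHGROUP=ffdhe6144; KEY=key1; CKEY=ckey1   # combination under test
    HOSTSOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10

    # host side: limit negotiation to the digest/DH group being tested
    scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_set_options \
        --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
    # target side: allow the host on the subsystem with its DH-HMAC-CHAP key(s)
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "$KEY" --dhchap-ctrlr-key "$CKEY"
    # host side: attach through the authenticated path, then detach again
    scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key "$KEY" --dhchap-ctrlr-key "$CKEY"
    scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0
    # kernel initiator: one connect/disconnect using the literal DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n "$SUBNQN"
    # target side: drop the host again before the next combination
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

In this log the combinations walk sha384 across ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192 with key indexes 0 through 3.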
00:10:49.984 00:10:49.984 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.984 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.984 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.243 { 00:10:50.243 "cntlid": 83, 00:10:50.243 "qid": 0, 00:10:50.243 "state": "enabled", 00:10:50.243 "thread": "nvmf_tgt_poll_group_000", 00:10:50.243 "listen_address": { 00:10:50.243 "trtype": "TCP", 00:10:50.243 "adrfam": "IPv4", 00:10:50.243 "traddr": "10.0.0.2", 00:10:50.243 "trsvcid": "4420" 00:10:50.243 }, 00:10:50.243 "peer_address": { 00:10:50.243 "trtype": "TCP", 00:10:50.243 "adrfam": "IPv4", 00:10:50.243 "traddr": "10.0.0.1", 00:10:50.243 "trsvcid": "46766" 00:10:50.243 }, 00:10:50.243 "auth": { 00:10:50.243 "state": "completed", 00:10:50.243 "digest": "sha384", 00:10:50.243 "dhgroup": "ffdhe6144" 00:10:50.243 } 00:10:50.243 } 00:10:50.243 ]' 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.243 16:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.539 16:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:50.539 16:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.539 16:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.539 16:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.539 16:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.830 16:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:10:51.398 16:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.398 16:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:51.398 16:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.398 16:14:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.398 16:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.398 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.398 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:51.398 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.657 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.225 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.225 { 00:10:52.225 "cntlid": 85, 00:10:52.225 "qid": 0, 00:10:52.225 "state": "enabled", 00:10:52.225 "thread": 
"nvmf_tgt_poll_group_000", 00:10:52.225 "listen_address": { 00:10:52.225 "trtype": "TCP", 00:10:52.225 "adrfam": "IPv4", 00:10:52.225 "traddr": "10.0.0.2", 00:10:52.225 "trsvcid": "4420" 00:10:52.225 }, 00:10:52.225 "peer_address": { 00:10:52.225 "trtype": "TCP", 00:10:52.225 "adrfam": "IPv4", 00:10:52.225 "traddr": "10.0.0.1", 00:10:52.225 "trsvcid": "46684" 00:10:52.225 }, 00:10:52.225 "auth": { 00:10:52.225 "state": "completed", 00:10:52.225 "digest": "sha384", 00:10:52.225 "dhgroup": "ffdhe6144" 00:10:52.225 } 00:10:52.225 } 00:10:52.225 ]' 00:10:52.225 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.484 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.484 16:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.484 16:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:52.484 16:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.484 16:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.484 16:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.484 16:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.743 16:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:10:53.310 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.310 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:53.310 16:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.310 16:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.310 16:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.310 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.310 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:53.310 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:53.569 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.136 00:10:54.136 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.136 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.136 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.395 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.395 16:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.395 16:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.395 16:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.395 16:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.395 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.395 { 00:10:54.395 "cntlid": 87, 00:10:54.395 "qid": 0, 00:10:54.395 "state": "enabled", 00:10:54.395 "thread": "nvmf_tgt_poll_group_000", 00:10:54.395 "listen_address": { 00:10:54.395 "trtype": "TCP", 00:10:54.395 "adrfam": "IPv4", 00:10:54.395 "traddr": "10.0.0.2", 00:10:54.395 "trsvcid": "4420" 00:10:54.395 }, 00:10:54.395 "peer_address": { 00:10:54.395 "trtype": "TCP", 00:10:54.395 "adrfam": "IPv4", 00:10:54.395 "traddr": "10.0.0.1", 00:10:54.395 "trsvcid": "46706" 00:10:54.395 }, 00:10:54.395 "auth": { 00:10:54.395 "state": "completed", 00:10:54.395 "digest": "sha384", 00:10:54.395 "dhgroup": "ffdhe6144" 00:10:54.395 } 00:10:54.395 } 00:10:54.395 ]' 00:10:54.395 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.395 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.395 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.395 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:54.395 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.654 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.654 16:14:38 
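Note that for key3 the nvmf_subsystem_add_host and bdev_nvme_attach_controller calls above carry only --dhchap-key, with no --dhchap-ctrlr-key. That follows from the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line: bash's ${parameter:+word} expansion yields the extra arguments only when a controller key is configured for that index, so the key3 pass appears to exercise one-way (host-only) authentication. A small standalone illustration of the idiom, with made-up ckeys values where only the empty slot at index 3 mirrors what the log implies:

    # hypothetical demo of ${parameter:+word}; values are illustrative only
    ckeys=( "c0" "c1" "c2" "" )
    for i in 0 1 2 3; do
        ckey=( ${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"} )
        echo "key$i -> extra args: ${ckey[*]:-<none>}"
    done
    # key0..key2 print "--dhchap-ctrlr-key ckeyN"; key3 prints "<none>", matching
    # the calls above that pass --dhchap-key key3 without a controller key.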
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.654 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.913 16:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:55.479 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.738 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.306 00:10:56.306 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.306 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.306 16:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.564 { 00:10:56.564 "cntlid": 89, 00:10:56.564 "qid": 0, 00:10:56.564 "state": "enabled", 00:10:56.564 "thread": "nvmf_tgt_poll_group_000", 00:10:56.564 "listen_address": { 00:10:56.564 "trtype": "TCP", 00:10:56.564 "adrfam": "IPv4", 00:10:56.564 "traddr": "10.0.0.2", 00:10:56.564 "trsvcid": "4420" 00:10:56.564 }, 00:10:56.564 "peer_address": { 00:10:56.564 "trtype": "TCP", 00:10:56.564 "adrfam": "IPv4", 00:10:56.564 "traddr": "10.0.0.1", 00:10:56.564 "trsvcid": "46738" 00:10:56.564 }, 00:10:56.564 "auth": { 00:10:56.564 "state": "completed", 00:10:56.564 "digest": "sha384", 00:10:56.564 "dhgroup": "ffdhe8192" 00:10:56.564 } 00:10:56.564 } 00:10:56.564 ]' 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.564 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:56.823 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.823 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.823 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.823 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.082 16:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:10:57.650 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.650 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:57.650 16:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.650 16:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 16:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.650 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.650 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:57.650 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.909 16:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.476 00:10:58.476 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.476 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.476 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.735 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.735 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.735 16:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.735 16:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.735 16:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.735 
16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.735 { 00:10:58.735 "cntlid": 91, 00:10:58.735 "qid": 0, 00:10:58.735 "state": "enabled", 00:10:58.735 "thread": "nvmf_tgt_poll_group_000", 00:10:58.735 "listen_address": { 00:10:58.735 "trtype": "TCP", 00:10:58.735 "adrfam": "IPv4", 00:10:58.735 "traddr": "10.0.0.2", 00:10:58.735 "trsvcid": "4420" 00:10:58.735 }, 00:10:58.735 "peer_address": { 00:10:58.735 "trtype": "TCP", 00:10:58.735 "adrfam": "IPv4", 00:10:58.735 "traddr": "10.0.0.1", 00:10:58.735 "trsvcid": "46760" 00:10:58.735 }, 00:10:58.735 "auth": { 00:10:58.735 "state": "completed", 00:10:58.735 "digest": "sha384", 00:10:58.735 "dhgroup": "ffdhe8192" 00:10:58.735 } 00:10:58.735 } 00:10:58.735 ]' 00:10:58.735 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.993 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.993 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.993 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:58.993 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.993 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.993 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.993 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.252 16:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:10:59.816 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.816 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:10:59.816 16:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.816 16:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.075 16:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.075 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.075 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:00.075 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.332 16:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.896 00:11:00.896 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.896 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.896 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.154 { 00:11:01.154 "cntlid": 93, 00:11:01.154 "qid": 0, 00:11:01.154 "state": "enabled", 00:11:01.154 "thread": "nvmf_tgt_poll_group_000", 00:11:01.154 "listen_address": { 00:11:01.154 "trtype": "TCP", 00:11:01.154 "adrfam": "IPv4", 00:11:01.154 "traddr": "10.0.0.2", 00:11:01.154 "trsvcid": "4420" 00:11:01.154 }, 00:11:01.154 "peer_address": { 00:11:01.154 "trtype": "TCP", 00:11:01.154 "adrfam": "IPv4", 00:11:01.154 "traddr": "10.0.0.1", 00:11:01.154 "trsvcid": "57498" 00:11:01.154 }, 00:11:01.154 "auth": { 00:11:01.154 "state": "completed", 00:11:01.154 "digest": "sha384", 00:11:01.154 "dhgroup": "ffdhe8192" 00:11:01.154 } 00:11:01.154 } 00:11:01.154 ]' 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.154 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.155 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.155 16:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.412 16:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:11:02.344 16:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.344 16:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:02.344 16:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.344 16:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.344 16:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.344 16:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.344 16:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:02.344 16:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.601 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.601 16:14:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:03.165 00:11:03.166 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.166 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.166 16:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.423 { 00:11:03.423 "cntlid": 95, 00:11:03.423 "qid": 0, 00:11:03.423 "state": "enabled", 00:11:03.423 "thread": "nvmf_tgt_poll_group_000", 00:11:03.423 "listen_address": { 00:11:03.423 "trtype": "TCP", 00:11:03.423 "adrfam": "IPv4", 00:11:03.423 "traddr": "10.0.0.2", 00:11:03.423 "trsvcid": "4420" 00:11:03.423 }, 00:11:03.423 "peer_address": { 00:11:03.423 "trtype": "TCP", 00:11:03.423 "adrfam": "IPv4", 00:11:03.423 "traddr": "10.0.0.1", 00:11:03.423 "trsvcid": "57540" 00:11:03.423 }, 00:11:03.423 "auth": { 00:11:03.423 "state": "completed", 00:11:03.423 "digest": "sha384", 00:11:03.423 "dhgroup": "ffdhe8192" 00:11:03.423 } 00:11:03.423 } 00:11:03.423 ]' 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.423 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.684 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:03.684 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.685 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.685 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.685 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.943 16:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:04.507 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.764 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.329 00:11:05.329 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.329 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.329 16:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.586 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.586 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.586 16:14:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.586 16:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.586 16:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.586 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.586 { 00:11:05.586 "cntlid": 97, 00:11:05.586 "qid": 0, 00:11:05.586 "state": "enabled", 00:11:05.586 "thread": "nvmf_tgt_poll_group_000", 00:11:05.586 "listen_address": { 00:11:05.586 "trtype": "TCP", 00:11:05.586 "adrfam": "IPv4", 00:11:05.586 "traddr": "10.0.0.2", 00:11:05.586 "trsvcid": "4420" 00:11:05.586 }, 00:11:05.586 "peer_address": { 00:11:05.586 "trtype": "TCP", 00:11:05.586 "adrfam": "IPv4", 00:11:05.586 "traddr": "10.0.0.1", 00:11:05.586 "trsvcid": "57564" 00:11:05.586 }, 00:11:05.586 "auth": { 00:11:05.586 "state": "completed", 00:11:05.587 "digest": "sha512", 00:11:05.587 "dhgroup": "null" 00:11:05.587 } 00:11:05.587 } 00:11:05.587 ]' 00:11:05.587 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.587 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:05.587 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.587 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:05.587 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.587 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.587 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.587 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.151 16:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:11:06.715 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.716 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:06.716 16:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.716 16:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.716 16:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.716 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.716 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:06.716 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:06.974 16:14:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.974 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.293 00:11:07.293 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.293 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.293 16:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.565 { 00:11:07.565 "cntlid": 99, 00:11:07.565 "qid": 0, 00:11:07.565 "state": "enabled", 00:11:07.565 "thread": "nvmf_tgt_poll_group_000", 00:11:07.565 "listen_address": { 00:11:07.565 "trtype": "TCP", 00:11:07.565 "adrfam": "IPv4", 00:11:07.565 "traddr": "10.0.0.2", 00:11:07.565 "trsvcid": "4420" 00:11:07.565 }, 00:11:07.565 "peer_address": { 00:11:07.565 "trtype": "TCP", 00:11:07.565 "adrfam": "IPv4", 00:11:07.565 "traddr": "10.0.0.1", 00:11:07.565 "trsvcid": "57596" 00:11:07.565 }, 00:11:07.565 "auth": { 00:11:07.565 "state": "completed", 00:11:07.565 "digest": "sha512", 00:11:07.565 "dhgroup": "null" 00:11:07.565 } 00:11:07.565 } 00:11:07.565 ]' 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.565 16:14:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.565 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.824 16:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:11:08.757 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.757 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:08.757 16:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.757 16:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.757 16:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.757 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.757 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:08.757 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.015 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.279 00:11:09.279 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.279 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.279 16:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.539 { 00:11:09.539 "cntlid": 101, 00:11:09.539 "qid": 0, 00:11:09.539 "state": "enabled", 00:11:09.539 "thread": "nvmf_tgt_poll_group_000", 00:11:09.539 "listen_address": { 00:11:09.539 "trtype": "TCP", 00:11:09.539 "adrfam": "IPv4", 00:11:09.539 "traddr": "10.0.0.2", 00:11:09.539 "trsvcid": "4420" 00:11:09.539 }, 00:11:09.539 "peer_address": { 00:11:09.539 "trtype": "TCP", 00:11:09.539 "adrfam": "IPv4", 00:11:09.539 "traddr": "10.0.0.1", 00:11:09.539 "trsvcid": "57622" 00:11:09.539 }, 00:11:09.539 "auth": { 00:11:09.539 "state": "completed", 00:11:09.539 "digest": "sha512", 00:11:09.539 "dhgroup": "null" 00:11:09.539 } 00:11:09.539 } 00:11:09.539 ]' 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.539 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.796 16:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret 
DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:11:10.727 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.727 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:10.727 16:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.727 16:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.727 16:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.727 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.727 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:10.727 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.985 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:11.243 00:11:11.243 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.243 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.243 16:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
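(Condensed reference for the passes traced above: every connect_authenticate() iteration in this log runs the same RPC / nvme-cli cycle, only rotating the digest, the DH group and the key index. The sketch below is assembled solely from commands that appear in this trace; key0/ckey0 and the "$secret"/"$ctrl_secret" variables are placeholders for whichever DHHC-1 key pair the current iteration uses, and rpc_cmd is the harness helper that talks to the nvmf target's RPC socket, while the host-side bdev_nvme calls go through rpc.py -s /var/tmp/host.sock, as seen above.)

    # host side: advertise only the digest/DH group pair under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

    # target side: authorize the host NQN with the key pair under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: attach a controller, which negotiates DH-HMAC-CHAP with those keys
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # verify: controller name on the host, auth state/digest/dhgroup on the target qpair
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

    # same check through the kernel initiator, then tear down for the next iteration
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 \
        --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"   # ctrl secret omitted for key3
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10

(The qpair JSON dumps in the log are the output of the nvmf_subsystem_get_qpairs step; the [[ ... ]] comparisons that follow each dump are the test asserting that auth.state is "completed" and that the negotiated digest and dhgroup match the pair configured for that iteration.)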
00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.501 { 00:11:11.501 "cntlid": 103, 00:11:11.501 "qid": 0, 00:11:11.501 "state": "enabled", 00:11:11.501 "thread": "nvmf_tgt_poll_group_000", 00:11:11.501 "listen_address": { 00:11:11.501 "trtype": "TCP", 00:11:11.501 "adrfam": "IPv4", 00:11:11.501 "traddr": "10.0.0.2", 00:11:11.501 "trsvcid": "4420" 00:11:11.501 }, 00:11:11.501 "peer_address": { 00:11:11.501 "trtype": "TCP", 00:11:11.501 "adrfam": "IPv4", 00:11:11.501 "traddr": "10.0.0.1", 00:11:11.501 "trsvcid": "53272" 00:11:11.501 }, 00:11:11.501 "auth": { 00:11:11.501 "state": "completed", 00:11:11.501 "digest": "sha512", 00:11:11.501 "dhgroup": "null" 00:11:11.501 } 00:11:11.501 } 00:11:11.501 ]' 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.501 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.502 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.760 16:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:12.693 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.951 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.208 00:11:13.208 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.208 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.208 16:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.466 { 00:11:13.466 "cntlid": 105, 00:11:13.466 "qid": 0, 00:11:13.466 "state": "enabled", 00:11:13.466 "thread": "nvmf_tgt_poll_group_000", 00:11:13.466 "listen_address": { 00:11:13.466 "trtype": "TCP", 00:11:13.466 "adrfam": "IPv4", 00:11:13.466 "traddr": "10.0.0.2", 00:11:13.466 "trsvcid": "4420" 00:11:13.466 }, 00:11:13.466 "peer_address": { 00:11:13.466 "trtype": "TCP", 00:11:13.466 "adrfam": "IPv4", 00:11:13.466 "traddr": "10.0.0.1", 00:11:13.466 "trsvcid": "53304" 00:11:13.466 }, 00:11:13.466 "auth": { 00:11:13.466 "state": "completed", 00:11:13.466 "digest": "sha512", 00:11:13.466 "dhgroup": "ffdhe2048" 00:11:13.466 } 00:11:13.466 } 00:11:13.466 ]' 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:13.466 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.724 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.724 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.724 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.724 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.724 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.982 16:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:11:14.548 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.548 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:14.548 16:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.548 16:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.548 16:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.548 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.548 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:14.548 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.806 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.373 00:11:15.373 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.373 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.373 16:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.631 { 00:11:15.631 "cntlid": 107, 00:11:15.631 "qid": 0, 00:11:15.631 "state": "enabled", 00:11:15.631 "thread": "nvmf_tgt_poll_group_000", 00:11:15.631 "listen_address": { 00:11:15.631 "trtype": "TCP", 00:11:15.631 "adrfam": "IPv4", 00:11:15.631 "traddr": "10.0.0.2", 00:11:15.631 "trsvcid": "4420" 00:11:15.631 }, 00:11:15.631 "peer_address": { 00:11:15.631 "trtype": "TCP", 00:11:15.631 "adrfam": "IPv4", 00:11:15.631 "traddr": "10.0.0.1", 00:11:15.631 "trsvcid": "53346" 00:11:15.631 }, 00:11:15.631 "auth": { 00:11:15.631 "state": "completed", 00:11:15.631 "digest": "sha512", 00:11:15.631 "dhgroup": "ffdhe2048" 00:11:15.631 } 00:11:15.631 } 00:11:15.631 ]' 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.631 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.889 16:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 
--hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.822 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.080 00:11:17.080 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.080 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.080 16:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:11:17.337 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.337 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.337 16:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.337 16:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.337 16:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.338 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.338 { 00:11:17.338 "cntlid": 109, 00:11:17.338 "qid": 0, 00:11:17.338 "state": "enabled", 00:11:17.338 "thread": "nvmf_tgt_poll_group_000", 00:11:17.338 "listen_address": { 00:11:17.338 "trtype": "TCP", 00:11:17.338 "adrfam": "IPv4", 00:11:17.338 "traddr": "10.0.0.2", 00:11:17.338 "trsvcid": "4420" 00:11:17.338 }, 00:11:17.338 "peer_address": { 00:11:17.338 "trtype": "TCP", 00:11:17.338 "adrfam": "IPv4", 00:11:17.338 "traddr": "10.0.0.1", 00:11:17.338 "trsvcid": "53382" 00:11:17.338 }, 00:11:17.338 "auth": { 00:11:17.338 "state": "completed", 00:11:17.338 "digest": "sha512", 00:11:17.338 "dhgroup": "ffdhe2048" 00:11:17.338 } 00:11:17.338 } 00:11:17.338 ]' 00:11:17.338 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.595 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:17.595 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.595 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:17.595 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.595 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.595 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.595 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.853 16:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.785 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:19.043 00:11:19.300 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.300 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.300 16:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.300 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.300 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.300 16:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.300 16:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.300 16:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.300 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.300 { 00:11:19.300 "cntlid": 111, 00:11:19.300 "qid": 0, 00:11:19.300 "state": "enabled", 00:11:19.300 "thread": "nvmf_tgt_poll_group_000", 00:11:19.300 "listen_address": { 00:11:19.300 "trtype": "TCP", 00:11:19.300 "adrfam": "IPv4", 00:11:19.300 "traddr": "10.0.0.2", 00:11:19.300 "trsvcid": "4420" 00:11:19.300 }, 00:11:19.300 "peer_address": { 00:11:19.300 "trtype": "TCP", 00:11:19.300 "adrfam": "IPv4", 00:11:19.300 "traddr": "10.0.0.1", 00:11:19.300 "trsvcid": "53418" 00:11:19.300 }, 00:11:19.300 "auth": { 00:11:19.300 "state": "completed", 00:11:19.300 
"digest": "sha512", 00:11:19.300 "dhgroup": "ffdhe2048" 00:11:19.300 } 00:11:19.300 } 00:11:19.300 ]' 00:11:19.556 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.556 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:19.556 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.556 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:19.556 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.556 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.556 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.556 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.811 16:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:11:20.372 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.372 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:20.372 16:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.372 16:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:20.630 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:20.888 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.888 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.888 16:15:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.888 16:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.888 16:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.888 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.888 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.146 00:11:21.146 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.146 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.146 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.404 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.404 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.404 16:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.404 16:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.404 16:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.404 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.404 { 00:11:21.404 "cntlid": 113, 00:11:21.404 "qid": 0, 00:11:21.404 "state": "enabled", 00:11:21.404 "thread": "nvmf_tgt_poll_group_000", 00:11:21.404 "listen_address": { 00:11:21.404 "trtype": "TCP", 00:11:21.404 "adrfam": "IPv4", 00:11:21.404 "traddr": "10.0.0.2", 00:11:21.404 "trsvcid": "4420" 00:11:21.404 }, 00:11:21.404 "peer_address": { 00:11:21.404 "trtype": "TCP", 00:11:21.404 "adrfam": "IPv4", 00:11:21.404 "traddr": "10.0.0.1", 00:11:21.404 "trsvcid": "36362" 00:11:21.404 }, 00:11:21.404 "auth": { 00:11:21.404 "state": "completed", 00:11:21.404 "digest": "sha512", 00:11:21.404 "dhgroup": "ffdhe3072" 00:11:21.404 } 00:11:21.404 } 00:11:21.404 ]' 00:11:21.404 16:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.404 16:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:21.404 16:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.404 16:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.404 16:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.404 16:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.404 16:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.404 16:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.662 16:15:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:11:22.596 16:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.596 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:22.596 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.596 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.596 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.596 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.596 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:22.596 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.855 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.113 00:11:23.113 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.113 16:15:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.113 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.371 { 00:11:23.371 "cntlid": 115, 00:11:23.371 "qid": 0, 00:11:23.371 "state": "enabled", 00:11:23.371 "thread": "nvmf_tgt_poll_group_000", 00:11:23.371 "listen_address": { 00:11:23.371 "trtype": "TCP", 00:11:23.371 "adrfam": "IPv4", 00:11:23.371 "traddr": "10.0.0.2", 00:11:23.371 "trsvcid": "4420" 00:11:23.371 }, 00:11:23.371 "peer_address": { 00:11:23.371 "trtype": "TCP", 00:11:23.371 "adrfam": "IPv4", 00:11:23.371 "traddr": "10.0.0.1", 00:11:23.371 "trsvcid": "36398" 00:11:23.371 }, 00:11:23.371 "auth": { 00:11:23.371 "state": "completed", 00:11:23.371 "digest": "sha512", 00:11:23.371 "dhgroup": "ffdhe3072" 00:11:23.371 } 00:11:23.371 } 00:11:23.371 ]' 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.371 16:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.371 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.371 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.371 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.631 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:11:24.561 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.561 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:24.561 16:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.561 16:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.561 16:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
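For readability, one full connect_authenticate iteration from the trace above (sha512 digest, ffdhe3072 DH group, key1/ckey1) condenses to the sequence below. This is only a sketch assembled from commands that already appear in this log: rpc_cmd is the autotest harness's target-side RPC wrapper, the host-side rpc.py path and socket are the ones printed above, and $HOSTNQN/$SUBNQN stand in for the long NQNs repeated throughout.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10
SUBNQN=nqn.2024-03.io.spdk:cnode0
# host side: restrict the initiator to the digest/dhgroup combination under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# target side: allow this host on the subsystem with the DH-HMAC-CHAP key pair
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach the controller, which forces the authentication exchange
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# tear down before the next digest/dhgroup/key combination
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_detach_controller nvme0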
00:11:24.561 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.561 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:24.561 16:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.561 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.126 00:11:25.126 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.126 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.126 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.382 { 00:11:25.382 "cntlid": 117, 00:11:25.382 "qid": 0, 00:11:25.382 "state": "enabled", 00:11:25.382 "thread": "nvmf_tgt_poll_group_000", 00:11:25.382 "listen_address": { 00:11:25.382 "trtype": "TCP", 00:11:25.382 "adrfam": "IPv4", 00:11:25.382 "traddr": "10.0.0.2", 00:11:25.382 
"trsvcid": "4420" 00:11:25.382 }, 00:11:25.382 "peer_address": { 00:11:25.382 "trtype": "TCP", 00:11:25.382 "adrfam": "IPv4", 00:11:25.382 "traddr": "10.0.0.1", 00:11:25.382 "trsvcid": "36434" 00:11:25.382 }, 00:11:25.382 "auth": { 00:11:25.382 "state": "completed", 00:11:25.382 "digest": "sha512", 00:11:25.382 "dhgroup": "ffdhe3072" 00:11:25.382 } 00:11:25.382 } 00:11:25.382 ]' 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:25.382 16:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.382 16:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:25.382 16:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.382 16:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.382 16:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.382 16:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.638 16:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:11:26.569 16:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.569 16:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:26.569 16:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.569 16:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.569 16:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.569 16:15:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:26.569 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.136 00:11:27.136 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.136 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.136 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.395 { 00:11:27.395 "cntlid": 119, 00:11:27.395 "qid": 0, 00:11:27.395 "state": "enabled", 00:11:27.395 "thread": "nvmf_tgt_poll_group_000", 00:11:27.395 "listen_address": { 00:11:27.395 "trtype": "TCP", 00:11:27.395 "adrfam": "IPv4", 00:11:27.395 "traddr": "10.0.0.2", 00:11:27.395 "trsvcid": "4420" 00:11:27.395 }, 00:11:27.395 "peer_address": { 00:11:27.395 "trtype": "TCP", 00:11:27.395 "adrfam": "IPv4", 00:11:27.395 "traddr": "10.0.0.1", 00:11:27.395 "trsvcid": "36462" 00:11:27.395 }, 00:11:27.395 "auth": { 00:11:27.395 "state": "completed", 00:11:27.395 "digest": "sha512", 00:11:27.395 "dhgroup": "ffdhe3072" 00:11:27.395 } 00:11:27.395 } 00:11:27.395 ]' 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.395 16:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:27.395 16:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.395 16:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.395 16:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.395 16:15:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.653 16:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:11:28.588 16:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.588 16:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:28.588 16:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.588 16:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.588 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:28.846 00:11:29.105 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.105 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.105 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.361 { 00:11:29.361 "cntlid": 121, 00:11:29.361 "qid": 0, 00:11:29.361 "state": "enabled", 00:11:29.361 "thread": "nvmf_tgt_poll_group_000", 00:11:29.361 "listen_address": { 00:11:29.361 "trtype": "TCP", 00:11:29.361 "adrfam": "IPv4", 00:11:29.361 "traddr": "10.0.0.2", 00:11:29.361 "trsvcid": "4420" 00:11:29.361 }, 00:11:29.361 "peer_address": { 00:11:29.361 "trtype": "TCP", 00:11:29.361 "adrfam": "IPv4", 00:11:29.361 "traddr": "10.0.0.1", 00:11:29.361 "trsvcid": "36496" 00:11:29.361 }, 00:11:29.361 "auth": { 00:11:29.361 "state": "completed", 00:11:29.361 "digest": "sha512", 00:11:29.361 "dhgroup": "ffdhe4096" 00:11:29.361 } 00:11:29.361 } 00:11:29.361 ]' 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:29.361 16:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.361 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.361 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.361 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.618 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:11:30.552 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.552 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:30.552 16:15:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.552 16:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.552 16:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.552 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.552 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:30.552 16:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.552 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.118 00:11:31.118 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.118 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.118 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.375 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.375 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.375 16:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.375 16:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.375 16:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.375 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.375 { 00:11:31.375 "cntlid": 123, 
00:11:31.375 "qid": 0, 00:11:31.375 "state": "enabled", 00:11:31.375 "thread": "nvmf_tgt_poll_group_000", 00:11:31.375 "listen_address": { 00:11:31.375 "trtype": "TCP", 00:11:31.375 "adrfam": "IPv4", 00:11:31.375 "traddr": "10.0.0.2", 00:11:31.375 "trsvcid": "4420" 00:11:31.375 }, 00:11:31.375 "peer_address": { 00:11:31.375 "trtype": "TCP", 00:11:31.375 "adrfam": "IPv4", 00:11:31.375 "traddr": "10.0.0.1", 00:11:31.375 "trsvcid": "48112" 00:11:31.375 }, 00:11:31.375 "auth": { 00:11:31.375 "state": "completed", 00:11:31.375 "digest": "sha512", 00:11:31.375 "dhgroup": "ffdhe4096" 00:11:31.375 } 00:11:31.375 } 00:11:31.375 ]' 00:11:31.375 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.376 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:31.376 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.376 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.376 16:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.376 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.376 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.376 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.633 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:11:32.578 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.578 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:32.578 16:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.578 16:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.578 16:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.578 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.578 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:32.578 16:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 
00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.578 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.161 00:11:33.161 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.161 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.161 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.420 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.420 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.420 16:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.420 16:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.420 16:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.420 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.420 { 00:11:33.420 "cntlid": 125, 00:11:33.420 "qid": 0, 00:11:33.420 "state": "enabled", 00:11:33.420 "thread": "nvmf_tgt_poll_group_000", 00:11:33.420 "listen_address": { 00:11:33.420 "trtype": "TCP", 00:11:33.420 "adrfam": "IPv4", 00:11:33.420 "traddr": "10.0.0.2", 00:11:33.420 "trsvcid": "4420" 00:11:33.420 }, 00:11:33.420 "peer_address": { 00:11:33.420 "trtype": "TCP", 00:11:33.420 "adrfam": "IPv4", 00:11:33.420 "traddr": "10.0.0.1", 00:11:33.420 "trsvcid": "48148" 00:11:33.420 }, 00:11:33.420 "auth": { 00:11:33.420 "state": "completed", 00:11:33.420 "digest": "sha512", 00:11:33.420 "dhgroup": "ffdhe4096" 00:11:33.420 } 00:11:33.420 } 00:11:33.420 ]' 00:11:33.420 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.420 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:33.421 16:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.421 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.421 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:11:33.421 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.421 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.421 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.679 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:11:34.613 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.613 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:34.613 16:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.613 16:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.613 16:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.613 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.613 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:34.613 16:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:34.613 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:34.871 00:11:34.871 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.871 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.871 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.436 { 00:11:35.436 "cntlid": 127, 00:11:35.436 "qid": 0, 00:11:35.436 "state": "enabled", 00:11:35.436 "thread": "nvmf_tgt_poll_group_000", 00:11:35.436 "listen_address": { 00:11:35.436 "trtype": "TCP", 00:11:35.436 "adrfam": "IPv4", 00:11:35.436 "traddr": "10.0.0.2", 00:11:35.436 "trsvcid": "4420" 00:11:35.436 }, 00:11:35.436 "peer_address": { 00:11:35.436 "trtype": "TCP", 00:11:35.436 "adrfam": "IPv4", 00:11:35.436 "traddr": "10.0.0.1", 00:11:35.436 "trsvcid": "48188" 00:11:35.436 }, 00:11:35.436 "auth": { 00:11:35.436 "state": "completed", 00:11:35.436 "digest": "sha512", 00:11:35.436 "dhgroup": "ffdhe4096" 00:11:35.436 } 00:11:35.436 } 00:11:35.436 ]' 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.436 16:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.436 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.436 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.436 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.436 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.436 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.694 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:11:36.259 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.259 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:36.259 16:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.259 16:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.517 16:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.517 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:36.517 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.517 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:36.517 16:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.517 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.101 00:11:37.101 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.101 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.101 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.359 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.359 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.359 16:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.359 16:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
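Each iteration also exercises the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets passed inline, the subsystem is disconnected, and the host entry is removed on the target. The flags below all appear verbatim in this trace; the two DHHC-1 secrets are shortened to placeholders here rather than repeating the full base64 strings, so the real generated keys have to be substituted before running.

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 \
    --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 \
    --dhchap-secret 'DHHC-1:00:<host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10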
00:11:37.359 16:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.359 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.359 { 00:11:37.359 "cntlid": 129, 00:11:37.359 "qid": 0, 00:11:37.359 "state": "enabled", 00:11:37.359 "thread": "nvmf_tgt_poll_group_000", 00:11:37.359 "listen_address": { 00:11:37.359 "trtype": "TCP", 00:11:37.359 "adrfam": "IPv4", 00:11:37.359 "traddr": "10.0.0.2", 00:11:37.359 "trsvcid": "4420" 00:11:37.359 }, 00:11:37.359 "peer_address": { 00:11:37.360 "trtype": "TCP", 00:11:37.360 "adrfam": "IPv4", 00:11:37.360 "traddr": "10.0.0.1", 00:11:37.360 "trsvcid": "48214" 00:11:37.360 }, 00:11:37.360 "auth": { 00:11:37.360 "state": "completed", 00:11:37.360 "digest": "sha512", 00:11:37.360 "dhgroup": "ffdhe6144" 00:11:37.360 } 00:11:37.360 } 00:11:37.360 ]' 00:11:37.360 16:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.360 16:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.360 16:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.360 16:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:37.360 16:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.617 16:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.617 16:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.617 16:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.876 16:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:11:38.442 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.442 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:38.442 16:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.442 16:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.442 16:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.442 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.442 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:38.442 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:11:38.700 16:15:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.700 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.265 00:11:39.265 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.265 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.265 16:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.523 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.523 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.523 16:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.523 16:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.523 16:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.523 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.523 { 00:11:39.523 "cntlid": 131, 00:11:39.523 "qid": 0, 00:11:39.523 "state": "enabled", 00:11:39.524 "thread": "nvmf_tgt_poll_group_000", 00:11:39.524 "listen_address": { 00:11:39.524 "trtype": "TCP", 00:11:39.524 "adrfam": "IPv4", 00:11:39.524 "traddr": "10.0.0.2", 00:11:39.524 "trsvcid": "4420" 00:11:39.524 }, 00:11:39.524 "peer_address": { 00:11:39.524 "trtype": "TCP", 00:11:39.524 "adrfam": "IPv4", 00:11:39.524 "traddr": "10.0.0.1", 00:11:39.524 "trsvcid": "48238" 00:11:39.524 }, 00:11:39.524 "auth": { 00:11:39.524 "state": "completed", 00:11:39.524 "digest": "sha512", 00:11:39.524 "dhgroup": "ffdhe6144" 00:11:39.524 } 00:11:39.524 } 00:11:39.524 ]' 00:11:39.524 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.524 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.524 16:15:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.524 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:39.524 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.524 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.524 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.524 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.782 16:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:11:40.347 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.347 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:40.347 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.347 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.347 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.347 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.347 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:40.347 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.605 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.171 00:11:41.171 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.171 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.171 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.430 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.430 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.430 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.430 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.430 16:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.430 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.430 { 00:11:41.430 "cntlid": 133, 00:11:41.430 "qid": 0, 00:11:41.430 "state": "enabled", 00:11:41.430 "thread": "nvmf_tgt_poll_group_000", 00:11:41.430 "listen_address": { 00:11:41.430 "trtype": "TCP", 00:11:41.430 "adrfam": "IPv4", 00:11:41.430 "traddr": "10.0.0.2", 00:11:41.430 "trsvcid": "4420" 00:11:41.430 }, 00:11:41.430 "peer_address": { 00:11:41.430 "trtype": "TCP", 00:11:41.430 "adrfam": "IPv4", 00:11:41.430 "traddr": "10.0.0.1", 00:11:41.430 "trsvcid": "54832" 00:11:41.430 }, 00:11:41.430 "auth": { 00:11:41.430 "state": "completed", 00:11:41.430 "digest": "sha512", 00:11:41.430 "dhgroup": "ffdhe6144" 00:11:41.430 } 00:11:41.430 } 00:11:41.430 ]' 00:11:41.430 16:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.430 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.430 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.430 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:41.430 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.430 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.430 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.430 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.689 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret 
DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:11:42.626 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.626 16:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:42.626 16:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.626 16:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:42.626 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:43.196 00:11:43.196 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.196 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.196 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.496 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.496 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:43.496 16:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.496 16:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.496 16:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.496 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.496 { 00:11:43.496 "cntlid": 135, 00:11:43.496 "qid": 0, 00:11:43.496 "state": "enabled", 00:11:43.496 "thread": "nvmf_tgt_poll_group_000", 00:11:43.496 "listen_address": { 00:11:43.496 "trtype": "TCP", 00:11:43.496 "adrfam": "IPv4", 00:11:43.496 "traddr": "10.0.0.2", 00:11:43.496 "trsvcid": "4420" 00:11:43.496 }, 00:11:43.496 "peer_address": { 00:11:43.496 "trtype": "TCP", 00:11:43.496 "adrfam": "IPv4", 00:11:43.496 "traddr": "10.0.0.1", 00:11:43.496 "trsvcid": "54860" 00:11:43.496 }, 00:11:43.496 "auth": { 00:11:43.496 "state": "completed", 00:11:43.496 "digest": "sha512", 00:11:43.496 "dhgroup": "ffdhe6144" 00:11:43.496 } 00:11:43.496 } 00:11:43.496 ]' 00:11:43.496 16:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.496 16:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.496 16:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.496 16:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:43.496 16:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.496 16:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.496 16:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.496 16:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.755 16:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:44.690 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.691 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.691 16:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.691 16:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.691 16:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.691 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.691 16:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.626 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.626 { 00:11:45.626 "cntlid": 137, 00:11:45.626 "qid": 0, 00:11:45.626 "state": "enabled", 00:11:45.626 "thread": "nvmf_tgt_poll_group_000", 00:11:45.626 "listen_address": { 00:11:45.626 "trtype": "TCP", 00:11:45.626 "adrfam": "IPv4", 00:11:45.626 "traddr": "10.0.0.2", 00:11:45.626 "trsvcid": "4420" 00:11:45.626 }, 00:11:45.626 "peer_address": { 00:11:45.626 "trtype": "TCP", 00:11:45.626 "adrfam": "IPv4", 00:11:45.626 "traddr": "10.0.0.1", 00:11:45.626 "trsvcid": "54892" 00:11:45.626 }, 00:11:45.626 "auth": { 00:11:45.626 "state": "completed", 00:11:45.626 "digest": "sha512", 00:11:45.626 "dhgroup": "ffdhe8192" 00:11:45.626 } 00:11:45.626 } 
00:11:45.626 ]' 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:45.626 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.885 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.885 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.885 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.144 16:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:11:46.713 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.713 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:46.713 16:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.713 16:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.713 16:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.713 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.713 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:46.713 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.973 16:15:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.973 16:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.541 00:11:47.541 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.541 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.541 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.800 { 00:11:47.800 "cntlid": 139, 00:11:47.800 "qid": 0, 00:11:47.800 "state": "enabled", 00:11:47.800 "thread": "nvmf_tgt_poll_group_000", 00:11:47.800 "listen_address": { 00:11:47.800 "trtype": "TCP", 00:11:47.800 "adrfam": "IPv4", 00:11:47.800 "traddr": "10.0.0.2", 00:11:47.800 "trsvcid": "4420" 00:11:47.800 }, 00:11:47.800 "peer_address": { 00:11:47.800 "trtype": "TCP", 00:11:47.800 "adrfam": "IPv4", 00:11:47.800 "traddr": "10.0.0.1", 00:11:47.800 "trsvcid": "54916" 00:11:47.800 }, 00:11:47.800 "auth": { 00:11:47.800 "state": "completed", 00:11:47.800 "digest": "sha512", 00:11:47.800 "dhgroup": "ffdhe8192" 00:11:47.800 } 00:11:47.800 } 00:11:47.800 ]' 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.800 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.059 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.059 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.059 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.059 16:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:01:NWFmN2NkMmRmNDdkM2QyZWRkYWZhMTVkNDI4MzE4NjZt5wuv: --dhchap-ctrl-secret DHHC-1:02:Y2ZiNTZkZTc3MzhjNzczZGNkNzU5M2NhNGI2N2Y0MjM4MTliMTVhZDIxNDc0OWMw+xMyTQ==: 00:11:48.996 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.996 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:48.996 16:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.996 16:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.996 16:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.996 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.996 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:48.996 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.255 16:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.824 00:11:49.824 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.824 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
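Note: every successful attach in this trace is followed by the same verification pattern: confirm the controller name on the host, then read the subsystem's qpairs on the target and check the negotiated auth parameters. A condensed sketch is below; tgt_rpc stands in for the test's rpc_cmd wrapper (the exact target socket and network-namespace handling is an assumption and is omitted here), everything else is taken from the trace.
#!/usr/bin/env bash
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
tgt_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }  # assumption: default target socket
subnqn="nqn.2024-03.io.spdk:cnode0"
# The host should see exactly the controller created by the attach step.
[[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]] || exit 1
# The target reports the qpair with the digest/dhgroup used for this iteration
# and auth state "completed" once DH-HMAC-CHAP has finished.
qpairs=$(tgt_rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha512"    ]] || exit 1
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe8192" ]] || exit 1
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]] || exit 1
# Detach before the next key/dhgroup iteration.
hostrpc bdev_nvme_detach_controller nvme0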
00:11:49.824 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.082 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.082 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.082 16:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.082 16:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.082 16:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.082 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.082 { 00:11:50.082 "cntlid": 141, 00:11:50.082 "qid": 0, 00:11:50.082 "state": "enabled", 00:11:50.082 "thread": "nvmf_tgt_poll_group_000", 00:11:50.082 "listen_address": { 00:11:50.082 "trtype": "TCP", 00:11:50.082 "adrfam": "IPv4", 00:11:50.082 "traddr": "10.0.0.2", 00:11:50.082 "trsvcid": "4420" 00:11:50.082 }, 00:11:50.082 "peer_address": { 00:11:50.082 "trtype": "TCP", 00:11:50.082 "adrfam": "IPv4", 00:11:50.082 "traddr": "10.0.0.1", 00:11:50.082 "trsvcid": "54942" 00:11:50.082 }, 00:11:50.082 "auth": { 00:11:50.082 "state": "completed", 00:11:50.082 "digest": "sha512", 00:11:50.082 "dhgroup": "ffdhe8192" 00:11:50.082 } 00:11:50.082 } 00:11:50.082 ]' 00:11:50.082 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.340 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.340 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.340 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:50.340 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.340 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.340 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.340 16:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.598 16:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:02:MjQ5MzllYjZjNjM2M2E3ZDExZWI0Y2Q3M2U5NTEzY2NmMzZlYWU3OGU2YTE0NDgz6O9Mog==: --dhchap-ctrl-secret DHHC-1:01:OGQ3ZWU0ZjIzNzhkMjBjODg5OWQzN2Y0OTAzNDBmZGYPHYLc: 00:11:51.164 16:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.422 16:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:51.422 16:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.422 16:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.422 16:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.422 16:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.422 16:15:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:51.422 16:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:51.681 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.248 00:11:52.248 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.248 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.248 16:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.508 { 00:11:52.508 "cntlid": 143, 00:11:52.508 "qid": 0, 00:11:52.508 "state": "enabled", 00:11:52.508 "thread": "nvmf_tgt_poll_group_000", 00:11:52.508 "listen_address": { 00:11:52.508 "trtype": "TCP", 00:11:52.508 "adrfam": "IPv4", 00:11:52.508 "traddr": "10.0.0.2", 00:11:52.508 "trsvcid": "4420" 00:11:52.508 }, 00:11:52.508 "peer_address": { 00:11:52.508 "trtype": "TCP", 00:11:52.508 "adrfam": "IPv4", 00:11:52.508 "traddr": "10.0.0.1", 00:11:52.508 "trsvcid": "59388" 
00:11:52.508 }, 00:11:52.508 "auth": { 00:11:52.508 "state": "completed", 00:11:52.508 "digest": "sha512", 00:11:52.508 "dhgroup": "ffdhe8192" 00:11:52.508 } 00:11:52.508 } 00:11:52.508 ]' 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.508 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.767 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.767 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.767 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.767 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.767 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.025 16:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:53.962 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.221 16:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.788 00:11:54.788 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.788 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.788 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.046 { 00:11:55.046 "cntlid": 145, 00:11:55.046 "qid": 0, 00:11:55.046 "state": "enabled", 00:11:55.046 "thread": "nvmf_tgt_poll_group_000", 00:11:55.046 "listen_address": { 00:11:55.046 "trtype": "TCP", 00:11:55.046 "adrfam": "IPv4", 00:11:55.046 "traddr": "10.0.0.2", 00:11:55.046 "trsvcid": "4420" 00:11:55.046 }, 00:11:55.046 "peer_address": { 00:11:55.046 "trtype": "TCP", 00:11:55.046 "adrfam": "IPv4", 00:11:55.046 "traddr": "10.0.0.1", 00:11:55.046 "trsvcid": "59412" 00:11:55.046 }, 00:11:55.046 "auth": { 00:11:55.046 "state": "completed", 00:11:55.046 "digest": "sha512", 00:11:55.046 "dhgroup": "ffdhe8192" 00:11:55.046 } 00:11:55.046 } 00:11:55.046 ]' 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.046 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.304 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.304 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.304 16:15:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.304 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.304 16:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.562 16:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:00:YzE3NzYxYWRkNTEyYjQwNmQ1NTRhYjhiMzkyNzk4MDVhMmEzYzdmOWNlNWU5MjI4FClD+A==: --dhchap-ctrl-secret DHHC-1:03:YjVmNTY5NmFhMzc4OTEzZmZlZWVmNWUzM2M2YTMwM2U2NDliNWExNzE4NjQ1MjU4ZGIxODMwYTIzZGQ0ODJhMryvB10=: 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:56.499 16:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:57.064 request: 00:11:57.064 { 00:11:57.064 "name": "nvme0", 00:11:57.064 "trtype": "tcp", 00:11:57.064 "traddr": "10.0.0.2", 00:11:57.064 "adrfam": "ipv4", 00:11:57.064 "trsvcid": "4420", 00:11:57.064 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:57.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10", 00:11:57.064 "prchk_reftag": false, 00:11:57.064 "prchk_guard": false, 00:11:57.064 "hdgst": false, 00:11:57.064 "ddgst": false, 00:11:57.064 "dhchap_key": "key2", 00:11:57.064 "method": "bdev_nvme_attach_controller", 00:11:57.064 "req_id": 1 00:11:57.064 } 00:11:57.064 Got JSON-RPC error response 00:11:57.064 response: 00:11:57.064 { 00:11:57.064 "code": -5, 00:11:57.064 "message": "Input/output error" 00:11:57.064 } 00:11:57.064 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:57.064 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:57.064 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:57.065 16:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:57.676 request: 00:11:57.676 { 00:11:57.676 "name": "nvme0", 00:11:57.676 "trtype": "tcp", 00:11:57.676 "traddr": "10.0.0.2", 00:11:57.676 "adrfam": "ipv4", 00:11:57.676 "trsvcid": "4420", 00:11:57.676 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:57.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10", 00:11:57.676 "prchk_reftag": false, 00:11:57.676 "prchk_guard": false, 00:11:57.676 "hdgst": false, 00:11:57.676 "ddgst": false, 00:11:57.676 "dhchap_key": "key1", 00:11:57.676 "dhchap_ctrlr_key": "ckey2", 00:11:57.676 "method": "bdev_nvme_attach_controller", 00:11:57.676 "req_id": 1 00:11:57.676 } 00:11:57.676 Got JSON-RPC error response 00:11:57.676 response: 00:11:57.676 { 00:11:57.676 "code": -5, 00:11:57.676 "message": "Input/output error" 00:11:57.676 } 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key1 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.676 16:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.243 request: 00:11:58.243 { 00:11:58.243 "name": "nvme0", 00:11:58.243 "trtype": "tcp", 00:11:58.243 "traddr": "10.0.0.2", 00:11:58.243 "adrfam": "ipv4", 00:11:58.243 "trsvcid": "4420", 00:11:58.243 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:58.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10", 00:11:58.243 "prchk_reftag": false, 00:11:58.243 "prchk_guard": false, 00:11:58.243 "hdgst": false, 00:11:58.243 "ddgst": false, 00:11:58.243 "dhchap_key": "key1", 00:11:58.243 "dhchap_ctrlr_key": "ckey1", 00:11:58.243 "method": "bdev_nvme_attach_controller", 00:11:58.243 "req_id": 1 00:11:58.243 } 00:11:58.243 Got JSON-RPC error response 00:11:58.243 response: 00:11:58.243 { 00:11:58.243 "code": -5, 00:11:58.243 "message": "Input/output error" 00:11:58.243 } 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 68841 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 68841 ']' 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 68841 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:58.243 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68841 00:11:58.501 killing process with pid 68841 00:11:58.501 16:15:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:58.501 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:58.501 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68841' 00:11:58.501 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 68841 00:11:58.501 16:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 68841 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71852 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71852 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 71852 ']' 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.501 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.759 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.759 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:58.759 16:15:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.759 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.759 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71852 00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 71852 ']' 00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.018 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.288 16:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.235 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.235 { 00:12:00.235 "cntlid": 1, 00:12:00.235 "qid": 0, 
00:12:00.235 "state": "enabled", 00:12:00.235 "thread": "nvmf_tgt_poll_group_000", 00:12:00.235 "listen_address": { 00:12:00.235 "trtype": "TCP", 00:12:00.235 "adrfam": "IPv4", 00:12:00.235 "traddr": "10.0.0.2", 00:12:00.235 "trsvcid": "4420" 00:12:00.235 }, 00:12:00.235 "peer_address": { 00:12:00.235 "trtype": "TCP", 00:12:00.235 "adrfam": "IPv4", 00:12:00.235 "traddr": "10.0.0.1", 00:12:00.235 "trsvcid": "59470" 00:12:00.235 }, 00:12:00.235 "auth": { 00:12:00.235 "state": "completed", 00:12:00.235 "digest": "sha512", 00:12:00.235 "dhgroup": "ffdhe8192" 00:12:00.235 } 00:12:00.235 } 00:12:00.235 ]' 00:12:00.235 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.492 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.492 16:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.492 16:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:00.492 16:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.492 16:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.492 16:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.492 16:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.750 16:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid 0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-secret DHHC-1:03:OTIxOGEzMmU1YmViZGMyZTdjZWM3MTVjNzY1YTU0YTdmMGNjYTk1NjA4OGM5MWZkYjg2MTkxODg0OTBlN2UzNzHwBLM=: 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --dhchap-key key3 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:01.686 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.945 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.203 request: 00:12:02.203 { 00:12:02.203 "name": "nvme0", 00:12:02.203 "trtype": "tcp", 00:12:02.203 "traddr": "10.0.0.2", 00:12:02.203 "adrfam": "ipv4", 00:12:02.203 "trsvcid": "4420", 00:12:02.203 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:02.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10", 00:12:02.203 "prchk_reftag": false, 00:12:02.203 "prchk_guard": false, 00:12:02.203 "hdgst": false, 00:12:02.203 "ddgst": false, 00:12:02.203 "dhchap_key": "key3", 00:12:02.203 "method": "bdev_nvme_attach_controller", 00:12:02.203 "req_id": 1 00:12:02.203 } 00:12:02.203 Got JSON-RPC error response 00:12:02.203 response: 00:12:02.203 { 00:12:02.203 "code": -5, 00:12:02.203 "message": "Input/output error" 00:12:02.203 } 00:12:02.203 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:02.203 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.203 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:02.203 16:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.203 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:12:02.203 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:02.203 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:02.203 16:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:02.462 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.462 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:02.462 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.462 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:02.720 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.720 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:02.720 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.720 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.720 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.979 request: 00:12:02.979 { 00:12:02.979 "name": "nvme0", 00:12:02.979 "trtype": "tcp", 00:12:02.979 "traddr": "10.0.0.2", 00:12:02.979 "adrfam": "ipv4", 00:12:02.979 "trsvcid": "4420", 00:12:02.979 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:02.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10", 00:12:02.979 "prchk_reftag": false, 00:12:02.979 "prchk_guard": false, 00:12:02.979 "hdgst": false, 00:12:02.979 "ddgst": false, 00:12:02.979 "dhchap_key": "key3", 00:12:02.979 "method": "bdev_nvme_attach_controller", 00:12:02.979 "req_id": 1 00:12:02.979 } 00:12:02.979 Got JSON-RPC error response 00:12:02.979 response: 00:12:02.979 { 00:12:02.979 "code": -5, 00:12:02.979 "message": "Input/output error" 00:12:02.979 } 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:02.979 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:03.238 16:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:03.497 request: 00:12:03.497 { 00:12:03.497 "name": "nvme0", 00:12:03.497 "trtype": "tcp", 00:12:03.497 "traddr": "10.0.0.2", 00:12:03.497 "adrfam": "ipv4", 00:12:03.497 "trsvcid": "4420", 00:12:03.497 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:03.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10", 00:12:03.497 "prchk_reftag": false, 00:12:03.497 "prchk_guard": false, 00:12:03.497 "hdgst": false, 00:12:03.497 "ddgst": false, 00:12:03.497 "dhchap_key": "key0", 00:12:03.497 "dhchap_ctrlr_key": "key1", 00:12:03.497 "method": "bdev_nvme_attach_controller", 00:12:03.497 "req_id": 1 00:12:03.497 } 00:12:03.497 Got 
JSON-RPC error response 00:12:03.497 response: 00:12:03.497 { 00:12:03.497 "code": -5, 00:12:03.497 "message": "Input/output error" 00:12:03.497 } 00:12:03.497 16:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:03.497 16:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.497 16:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.497 16:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.497 16:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:03.497 16:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:03.755 00:12:04.014 16:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:04.014 16:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.014 16:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:04.273 16:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.273 16:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.273 16:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68879 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 68879 ']' 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 68879 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68879 00:12:04.532 killing process with pid 68879 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68879' 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 68879 00:12:04.532 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 68879 00:12:04.791 16:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:04.791 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.791 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 
00:12:04.791 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.791 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:04.791 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.791 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.791 rmmod nvme_tcp 00:12:04.791 rmmod nvme_fabrics 00:12:04.791 rmmod nvme_keyring 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71852 ']' 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71852 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 71852 ']' 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 71852 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71852 00:12:05.049 killing process with pid 71852 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71852' 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 71852 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 71852 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:05.049 16:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yX3 /tmp/spdk.key-sha256.iPJ /tmp/spdk.key-sha384.hgL /tmp/spdk.key-sha512.109 /tmp/spdk.key-sha512.CW3 /tmp/spdk.key-sha384.Qix /tmp/spdk.key-sha256.hSU '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:05.308 ************************************ 00:12:05.308 END TEST nvmf_auth_target 00:12:05.308 ************************************ 00:12:05.308 00:12:05.308 real 2m45.817s 00:12:05.308 user 6m37.474s 00:12:05.308 sys 0m25.639s 
00:12:05.308 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.308 16:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.308 16:15:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:05.308 16:15:48 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:12:05.308 16:15:48 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:05.308 16:15:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:05.308 16:15:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.308 16:15:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:05.308 ************************************ 00:12:05.308 START TEST nvmf_bdevio_no_huge 00:12:05.308 ************************************ 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:05.308 * Looking for test storage... 00:12:05.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.308 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.309 16:15:48 
nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 
-- # MALLOC_BDEV_SIZE=64 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:05.309 16:15:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:05.309 Cannot find device "nvmf_tgt_br" 00:12:05.309 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:05.309 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.309 Cannot find device "nvmf_tgt_br2" 00:12:05.309 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@156 -- # true 00:12:05.309 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:05.309 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:05.567 Cannot find device "nvmf_tgt_br" 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:05.568 Cannot find device "nvmf_tgt_br2" 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.568 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br 
type bridge 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:05.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:12:05.827 00:12:05.827 --- 10.0.0.2 ping statistics --- 00:12:05.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.827 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:05.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:05.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:05.827 00:12:05.827 --- 10.0.0.3 ping statistics --- 00:12:05.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.827 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:05.827 00:12:05.827 --- 10.0.0.1 ping statistics --- 00:12:05.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.827 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72166 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72166 
00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72166 ']' 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.827 16:15:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:05.827 [2024-07-12 16:15:49.474695] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:12:05.827 [2024-07-12 16:15:49.474809] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:06.086 [2024-07-12 16:15:49.628387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.086 [2024-07-12 16:15:49.771247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.086 [2024-07-12 16:15:49.771558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.086 [2024-07-12 16:15:49.771723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.086 [2024-07-12 16:15:49.772029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.086 [2024-07-12 16:15:49.772212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:06.086 [2024-07-12 16:15:49.773113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:06.086 [2024-07-12 16:15:49.773271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:06.086 [2024-07-12 16:15:49.773447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:06.086 [2024-07-12 16:15:49.773535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.086 [2024-07-12 16:15:49.783798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.021 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:07.022 [2024-07-12 16:15:50.553380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:07.022 Malloc0 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:07.022 [2024-07-12 16:15:50.601455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:07.022 { 00:12:07.022 "params": { 00:12:07.022 "name": "Nvme$subsystem", 00:12:07.022 "trtype": "$TEST_TRANSPORT", 00:12:07.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:07.022 "adrfam": "ipv4", 00:12:07.022 "trsvcid": "$NVMF_PORT", 00:12:07.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:07.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:07.022 "hdgst": ${hdgst:-false}, 00:12:07.022 "ddgst": ${ddgst:-false} 00:12:07.022 }, 00:12:07.022 "method": "bdev_nvme_attach_controller" 00:12:07.022 } 00:12:07.022 EOF 00:12:07.022 )") 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:07.022 16:15:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:07.022 "params": { 00:12:07.022 "name": "Nvme1", 00:12:07.022 "trtype": "tcp", 00:12:07.022 "traddr": "10.0.0.2", 00:12:07.022 "adrfam": "ipv4", 00:12:07.022 "trsvcid": "4420", 00:12:07.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:07.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:07.022 "hdgst": false, 00:12:07.022 "ddgst": false 00:12:07.022 }, 00:12:07.022 "method": "bdev_nvme_attach_controller" 00:12:07.022 }' 00:12:07.022 [2024-07-12 16:15:50.657945] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:12:07.022 [2024-07-12 16:15:50.658119] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72202 ] 00:12:07.280 [2024-07-12 16:15:50.812364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.280 [2024-07-12 16:15:50.948594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.280 [2024-07-12 16:15:50.948755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.280 [2024-07-12 16:15:50.948811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.280 [2024-07-12 16:15:50.963631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:07.539 I/O targets: 00:12:07.539 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:07.539 00:12:07.539 00:12:07.539 CUnit - A unit testing framework for C - Version 2.1-3 00:12:07.539 http://cunit.sourceforge.net/ 00:12:07.539 00:12:07.539 00:12:07.539 Suite: bdevio tests on: Nvme1n1 00:12:07.539 Test: blockdev write read block ...passed 00:12:07.539 Test: blockdev write zeroes read block ...passed 00:12:07.539 Test: blockdev write zeroes read no split ...passed 00:12:07.539 Test: blockdev write zeroes read split ...passed 00:12:07.539 Test: blockdev write zeroes read split partial ...passed 00:12:07.539 Test: blockdev reset ...[2024-07-12 16:15:51.162457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:07.539 [2024-07-12 16:15:51.162598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1374310 (9): Bad file descriptor 00:12:07.539 [2024-07-12 16:15:51.176985] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:07.539 passed 00:12:07.539 Test: blockdev write read 8 blocks ...passed 00:12:07.539 Test: blockdev write read size > 128k ...passed 00:12:07.539 Test: blockdev write read invalid size ...passed 00:12:07.539 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:07.539 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:07.539 Test: blockdev write read max offset ...passed 00:12:07.539 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:07.539 Test: blockdev writev readv 8 blocks ...passed 00:12:07.539 Test: blockdev writev readv 30 x 1block ...passed 00:12:07.539 Test: blockdev writev readv block ...passed 00:12:07.539 Test: blockdev writev readv size > 128k ...passed 00:12:07.539 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:07.539 Test: blockdev comparev and writev ...[2024-07-12 16:15:51.188036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.539 [2024-07-12 16:15:51.188088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.188115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.539 [2024-07-12 16:15:51.188129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.188587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.539 [2024-07-12 16:15:51.188631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.188654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.539 [2024-07-12 16:15:51.188668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.189109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.539 [2024-07-12 16:15:51.189153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.189175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.539 [2024-07-12 16:15:51.189188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.189575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.539 [2024-07-12 16:15:51.189612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.189635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.539 [2024-07-12 16:15:51.189647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:07.539 passed 00:12:07.539 Test: blockdev nvme passthru rw ...passed 00:12:07.539 Test: blockdev nvme passthru vendor specific ...[2024-07-12 16:15:51.191063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.539 [2024-07-12 16:15:51.191097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.191442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.539 [2024-07-12 16:15:51.191480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.191852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.539 [2024-07-12 16:15:51.191917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:07.539 [2024-07-12 16:15:51.192105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.539 [2024-07-12 16:15:51.192470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:07.539 passed 00:12:07.539 Test: blockdev nvme admin passthru ...passed 00:12:07.539 Test: blockdev copy ...passed 00:12:07.539 00:12:07.539 Run Summary: Type Total Ran Passed Failed Inactive 00:12:07.539 suites 1 1 n/a 0 0 00:12:07.539 tests 23 23 23 0 0 00:12:07.539 asserts 152 152 152 0 n/a 00:12:07.539 00:12:07.539 Elapsed time = 0.165 seconds 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:08.106 rmmod nvme_tcp 00:12:08.106 rmmod nvme_fabrics 00:12:08.106 rmmod nvme_keyring 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72166 ']' 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72166 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72166 ']' 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72166 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72166 00:12:08.106 killing process with pid 72166 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72166' 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72166 00:12:08.106 16:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72166 00:12:08.365 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:08.365 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:08.365 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:08.365 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:08.365 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:08.365 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.365 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.365 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.624 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:08.624 00:12:08.624 real 0m3.274s 00:12:08.624 user 0m10.500s 00:12:08.624 sys 0m1.247s 00:12:08.624 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.624 ************************************ 00:12:08.624 END TEST nvmf_bdevio_no_huge 00:12:08.624 ************************************ 00:12:08.624 16:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:08.624 16:15:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:08.624 16:15:52 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:08.624 16:15:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.624 16:15:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.624 16:15:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:08.624 ************************************ 00:12:08.624 START TEST nvmf_tls 00:12:08.624 ************************************ 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:08.624 * Looking for test storage... 
00:12:08.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:08.624 16:15:52 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:08.625 Cannot find device "nvmf_tgt_br" 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.625 Cannot find device "nvmf_tgt_br2" 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:08.625 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:08.884 Cannot find device "nvmf_tgt_br" 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:08.884 Cannot find device "nvmf_tgt_br2" 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.884 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:09.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:12:09.143 00:12:09.143 --- 10.0.0.2 ping statistics --- 00:12:09.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.143 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:09.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:09.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:12:09.143 00:12:09.143 --- 10.0.0.3 ping statistics --- 00:12:09.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.143 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:09.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:09.143 00:12:09.143 --- 10.0.0.1 ping statistics --- 00:12:09.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.143 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72383 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72383 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72383 ']' 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.143 16:15:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:09.143 [2024-07-12 16:15:52.724790] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:12:09.143 [2024-07-12 16:15:52.724903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.401 [2024-07-12 16:15:52.872061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.401 [2024-07-12 16:15:52.929291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.401 [2024-07-12 16:15:52.929340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:09.401 [2024-07-12 16:15:52.929351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.401 [2024-07-12 16:15:52.929359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.401 [2024-07-12 16:15:52.929366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.401 [2024-07-12 16:15:52.929396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.335 16:15:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.335 16:15:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:10.335 16:15:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:10.335 16:15:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:10.335 16:15:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:10.335 16:15:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.335 16:15:53 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:10.335 16:15:53 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:10.335 true 00:12:10.335 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:10.335 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:10.593 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:10.593 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:10.593 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:10.851 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:10.851 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:11.110 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:11.110 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:11.110 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:11.368 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:11.368 16:15:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:11.627 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:11.627 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:11.627 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:11.627 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:11.886 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:11.886 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:11.886 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:11.886 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:11.886 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:12:12.453 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:12.453 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:12.453 16:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:12.453 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:12.453 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:12.712 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.AyhhWNJxxq 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.7ct4xxoldp 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.AyhhWNJxxq 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7ct4xxoldp 00:12:12.971 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:13.230 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:13.230 [2024-07-12 16:15:56.953282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:12:13.490 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.AyhhWNJxxq 00:12:13.490 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AyhhWNJxxq 00:12:13.490 16:15:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:13.490 [2024-07-12 16:15:57.190916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.490 16:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:13.749 16:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:14.008 [2024-07-12 16:15:57.614974] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:14.008 [2024-07-12 16:15:57.615164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.008 16:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:14.266 malloc0 00:12:14.266 16:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:14.525 16:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AyhhWNJxxq 00:12:14.784 [2024-07-12 16:15:58.370025] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:14.784 16:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.AyhhWNJxxq 00:12:26.997 Initializing NVMe Controllers 00:12:26.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:26.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:26.997 Initialization complete. Launching workers. 
00:12:26.997 ======================================================== 00:12:26.997 Latency(us) 00:12:26.997 Device Information : IOPS MiB/s Average min max 00:12:26.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10592.29 41.38 6043.37 1268.69 7933.43 00:12:26.997 ======================================================== 00:12:26.997 Total : 10592.29 41.38 6043.37 1268.69 7933.43 00:12:26.997 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AyhhWNJxxq 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AyhhWNJxxq' 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72615 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72615 /var/tmp/bdevperf.sock 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72615 ']' 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.997 16:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:26.997 [2024-07-12 16:16:08.627686] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:12:26.997 [2024-07-12 16:16:08.627804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72615 ] 00:12:26.997 [2024-07-12 16:16:08.768545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.997 [2024-07-12 16:16:08.838910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.997 [2024-07-12 16:16:08.872546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:26.997 16:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.997 16:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:26.998 16:16:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AyhhWNJxxq 00:12:26.998 [2024-07-12 16:16:09.809790] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:26.998 [2024-07-12 16:16:09.809918] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:26.998 TLSTESTn1 00:12:26.998 16:16:09 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:26.998 Running I/O for 10 seconds... 00:12:36.974 00:12:36.974 Latency(us) 00:12:36.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.974 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:36.974 Verification LBA range: start 0x0 length 0x2000 00:12:36.974 TLSTESTn1 : 10.02 4144.67 16.19 0.00 0.00 30819.64 7566.43 29312.47 00:12:36.974 =================================================================================================================== 00:12:36.974 Total : 4144.67 16.19 0.00 0.00 30819.64 7566.43 29312.47 00:12:36.974 0 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72615 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72615 ']' 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72615 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72615 00:12:36.974 killing process with pid 72615 00:12:36.974 Received shutdown signal, test time was about 10.000000 seconds 00:12:36.974 00:12:36.974 Latency(us) 00:12:36.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.974 =================================================================================================================== 00:12:36.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72615' 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72615 00:12:36.974 [2024-07-12 16:16:20.056400] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72615 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7ct4xxoldp 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7ct4xxoldp 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7ct4xxoldp 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7ct4xxoldp' 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72744 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72744 /var/tmp/bdevperf.sock 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72744 ']' 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:36.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:36.974 [2024-07-12 16:16:20.297261] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:12:36.974 [2024-07-12 16:16:20.297941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72744 ] 00:12:36.974 [2024-07-12 16:16:20.439192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.974 [2024-07-12 16:16:20.505673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.974 [2024-07-12 16:16:20.539087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:36.974 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ct4xxoldp 00:12:37.234 [2024-07-12 16:16:20.824381] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:37.234 [2024-07-12 16:16:20.824541] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:37.234 [2024-07-12 16:16:20.833082] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:37.234 [2024-07-12 16:16:20.833937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18980a0 (107): Transport endpoint is not connected 00:12:37.234 [2024-07-12 16:16:20.834934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18980a0 (9): Bad file descriptor 00:12:37.234 [2024-07-12 16:16:20.835921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:37.234 [2024-07-12 16:16:20.835990] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:37.234 [2024-07-12 16:16:20.836006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
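What this deliberate negative test (the NOT wrapper above) exercises: bdevperf attaches to nqn.2016-06.io.spdk:cnode1 as host1 but presents the second key, /tmp/tmp.7ct4xxoldp, while the target only has /tmp/tmp.AyhhWNJxxq registered for that host, so the TLS handshake is torn down and the attach RPC fails. A minimal sketch of the two attach calls being contrasted, with the socket path, NQNs and key paths copied from this run and shown for illustration only:

    # attach that succeeded earlier in the run: the key matches the one the
    # target registered for host1 via nvmf_subsystem_add_host
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.AyhhWNJxxq

    # attach exercised here: same subsystem/host pair but the mismatched key,
    # so the handshake fails and the RPC returns code -5 (Input/output error)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.7ct4xxoldp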
00:12:37.234 request: 00:12:37.234 { 00:12:37.234 "name": "TLSTEST", 00:12:37.234 "trtype": "tcp", 00:12:37.234 "traddr": "10.0.0.2", 00:12:37.234 "adrfam": "ipv4", 00:12:37.234 "trsvcid": "4420", 00:12:37.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.234 "prchk_reftag": false, 00:12:37.234 "prchk_guard": false, 00:12:37.234 "hdgst": false, 00:12:37.234 "ddgst": false, 00:12:37.234 "psk": "/tmp/tmp.7ct4xxoldp", 00:12:37.234 "method": "bdev_nvme_attach_controller", 00:12:37.234 "req_id": 1 00:12:37.234 } 00:12:37.234 Got JSON-RPC error response 00:12:37.234 response: 00:12:37.234 { 00:12:37.234 "code": -5, 00:12:37.234 "message": "Input/output error" 00:12:37.234 } 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72744 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72744 ']' 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72744 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72744 00:12:37.234 killing process with pid 72744 00:12:37.234 Received shutdown signal, test time was about 10.000000 seconds 00:12:37.234 00:12:37.234 Latency(us) 00:12:37.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.234 =================================================================================================================== 00:12:37.234 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72744' 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72744 00:12:37.234 [2024-07-12 16:16:20.871346] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:37.234 16:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72744 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AyhhWNJxxq 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AyhhWNJxxq 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:37.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AyhhWNJxxq 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AyhhWNJxxq' 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72763 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72763 /var/tmp/bdevperf.sock 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72763 ']' 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:37.494 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:37.494 [2024-07-12 16:16:21.072271] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:12:37.494 [2024-07-12 16:16:21.072363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72763 ] 00:12:37.494 [2024-07-12 16:16:21.209990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.753 [2024-07-12 16:16:21.263607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.753 [2024-07-12 16:16:21.291840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:37.753 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.753 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:37.753 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.AyhhWNJxxq 00:12:38.012 [2024-07-12 16:16:21.556826] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:38.012 [2024-07-12 16:16:21.557008] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:38.012 [2024-07-12 16:16:21.563045] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:38.012 [2024-07-12 16:16:21.563081] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:38.012 [2024-07-12 16:16:21.563144] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:38.012 [2024-07-12 16:16:21.563589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b110a0 (107): Transport endpoint is not connected 00:12:38.012 [2024-07-12 16:16:21.564572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b110a0 (9): Bad file descriptor 00:12:38.012 [2024-07-12 16:16:21.565568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:38.012 [2024-07-12 16:16:21.565606] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:38.012 [2024-07-12 16:16:21.565645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
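This second negative test fails for a different reason than the previous one: the initiator presents the registered key but identifies as nqn.2016-06.io.spdk:host2, and no PSK was ever registered for the (host2, cnode1) pair, so the target cannot resolve the TLS identity (the "Could not find PSK for identity" errors above). Only host1 was added to the subsystem earlier in the run; a host2 registration of the following shape, hypothetical for this run and shown only to illustrate the mapping the PSK lookup expects, would be needed for this attach to succeed:

    # existing registration, traced earlier in this run
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AyhhWNJxxq

    # hypothetical additional registration that would let host2 complete the handshake
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.AyhhWNJxxq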
00:12:38.012 request: 00:12:38.012 { 00:12:38.012 "name": "TLSTEST", 00:12:38.012 "trtype": "tcp", 00:12:38.012 "traddr": "10.0.0.2", 00:12:38.012 "adrfam": "ipv4", 00:12:38.012 "trsvcid": "4420", 00:12:38.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.012 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:38.012 "prchk_reftag": false, 00:12:38.012 "prchk_guard": false, 00:12:38.012 "hdgst": false, 00:12:38.012 "ddgst": false, 00:12:38.012 "psk": "/tmp/tmp.AyhhWNJxxq", 00:12:38.012 "method": "bdev_nvme_attach_controller", 00:12:38.012 "req_id": 1 00:12:38.012 } 00:12:38.012 Got JSON-RPC error response 00:12:38.012 response: 00:12:38.012 { 00:12:38.012 "code": -5, 00:12:38.012 "message": "Input/output error" 00:12:38.012 } 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72763 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72763 ']' 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72763 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72763 00:12:38.013 killing process with pid 72763 00:12:38.013 Received shutdown signal, test time was about 10.000000 seconds 00:12:38.013 00:12:38.013 Latency(us) 00:12:38.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.013 =================================================================================================================== 00:12:38.013 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72763' 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72763 00:12:38.013 [2024-07-12 16:16:21.609474] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:38.013 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72763 00:12:38.272 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:38.272 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:38.272 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.272 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.272 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AyhhWNJxxq 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AyhhWNJxxq 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:38.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AyhhWNJxxq 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AyhhWNJxxq' 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72779 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72779 /var/tmp/bdevperf.sock 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72779 ']' 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.273 16:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.273 [2024-07-12 16:16:21.807232] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:12:38.273 [2024-07-12 16:16:21.807328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72779 ] 00:12:38.273 [2024-07-12 16:16:21.939031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.273 [2024-07-12 16:16:21.991323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.532 [2024-07-12 16:16:22.020876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:39.100 16:16:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.100 16:16:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:39.100 16:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AyhhWNJxxq 00:12:39.356 [2024-07-12 16:16:23.030386] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:39.356 [2024-07-12 16:16:23.031007] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:39.356 [2024-07-12 16:16:23.036352] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:39.356 [2024-07-12 16:16:23.036600] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:39.356 [2024-07-12 16:16:23.036813] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:39.356 [2024-07-12 16:16:23.037150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb400a0 (107): Transport endpoint is not connected 00:12:39.356 [2024-07-12 16:16:23.038054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb400a0 (9): Bad file descriptor 00:12:39.356 [2024-07-12 16:16:23.039048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:39.356 [2024-07-12 16:16:23.039081] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:39.356 [2024-07-12 16:16:23.039097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:12:39.356 request: 00:12:39.356 { 00:12:39.356 "name": "TLSTEST", 00:12:39.356 "trtype": "tcp", 00:12:39.356 "traddr": "10.0.0.2", 00:12:39.356 "adrfam": "ipv4", 00:12:39.356 "trsvcid": "4420", 00:12:39.356 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:39.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:39.356 "prchk_reftag": false, 00:12:39.356 "prchk_guard": false, 00:12:39.356 "hdgst": false, 00:12:39.356 "ddgst": false, 00:12:39.356 "psk": "/tmp/tmp.AyhhWNJxxq", 00:12:39.356 "method": "bdev_nvme_attach_controller", 00:12:39.356 "req_id": 1 00:12:39.356 } 00:12:39.356 Got JSON-RPC error response 00:12:39.356 response: 00:12:39.356 { 00:12:39.356 "code": -5, 00:12:39.356 "message": "Input/output error" 00:12:39.356 } 00:12:39.356 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72779 00:12:39.356 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72779 ']' 00:12:39.356 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72779 00:12:39.356 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:39.356 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.356 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72779 00:12:39.613 killing process with pid 72779 00:12:39.613 Received shutdown signal, test time was about 10.000000 seconds 00:12:39.613 00:12:39.613 Latency(us) 00:12:39.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.613 =================================================================================================================== 00:12:39.613 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:39.613 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:39.613 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:39.613 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72779' 00:12:39.613 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72779 00:12:39.614 [2024-07-12 16:16:23.086092] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72779 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72805 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72805 /var/tmp/bdevperf.sock 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72805 ']' 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:39.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.614 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:39.614 [2024-07-12 16:16:23.291786] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:12:39.614 [2024-07-12 16:16:23.292045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72805 ] 00:12:39.871 [2024-07-12 16:16:23.432666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.871 [2024-07-12 16:16:23.494540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.871 [2024-07-12 16:16:23.527597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:39.871 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.871 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:39.871 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:40.128 [2024-07-12 16:16:23.843462] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:40.128 [2024-07-12 16:16:23.845605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12558f0 (9): Bad file descriptor 00:12:40.128 [2024-07-12 16:16:23.846601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:40.128 [2024-07-12 16:16:23.847176] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:40.128 request: 00:12:40.128 { 00:12:40.128 "name": "TLSTEST", 00:12:40.128 "trtype": "tcp", 00:12:40.128 "traddr": "10.0.0.2", 00:12:40.128 "adrfam": "ipv4", 00:12:40.128 "trsvcid": "4420", 00:12:40.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:40.128 "prchk_reftag": false, 00:12:40.128 "prchk_guard": false, 00:12:40.128 "hdgst": false, 00:12:40.128 "ddgst": false, 00:12:40.128 "method": "bdev_nvme_attach_controller", 00:12:40.128 "req_id": 1 00:12:40.128 } 00:12:40.128 Got JSON-RPC error response 00:12:40.128 response: 00:12:40.128 { 00:12:40.128 "code": -5, 00:12:40.128 "message": "Input/output error" 00:12:40.128 } 00:12:40.128 [2024-07-12 16:16:23.847398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72805 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72805 ']' 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72805 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72805 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72805' 00:12:40.386 killing process with pid 72805 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72805 00:12:40.386 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.386 00:12:40.386 Latency(us) 00:12:40.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.386 =================================================================================================================== 00:12:40.386 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:40.386 16:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72805 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72383 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72383 ']' 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72383 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72383 00:12:40.386 killing process with pid 72383 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72383' 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72383 00:12:40.386 [2024-07-12 16:16:24.069258] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:40.386 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72383 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@702 -- # local prefix key digest 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.RMDll0bN69 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.RMDll0bN69 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72831 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72831 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72831 ']' 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.644 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.644 [2024-07-12 16:16:24.343164] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:12:40.644 [2024-07-12 16:16:24.343235] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.909 [2024-07-12 16:16:24.473337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.909 [2024-07-12 16:16:24.529449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.909 [2024-07-12 16:16:24.529501] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.909 [2024-07-12 16:16:24.529511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.909 [2024-07-12 16:16:24.529519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
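The format_interchange_psk/format_key step above wraps the configured hex string into the TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash indicator (02 here), and a base64 blob, written to a mktemp file that is then restricted to mode 0600. The base64 portion can be checked by hand; a small sketch, assuming standard base64 and that the trailing four decoded bytes are a CRC appended by format_key (an assumption about the helper — the first 48 bytes are verifiably the configured key string itself):

  key='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  b64=${key#NVMeTLSkey-1:02:}   # strip the prefix and hash indicator
  b64=${b64%:}                  # and the trailing separator
  # Prints the configured key 00112233...56677; the remaining 4 bytes are the checksum.
  echo -n "$b64" | base64 -d | head -c 48; echo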
00:12:40.909 [2024-07-12 16:16:24.529526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.909 [2024-07-12 16:16:24.529549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.909 [2024-07-12 16:16:24.558012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:40.909 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.909 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:40.909 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.909 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:40.909 16:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.184 16:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.184 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.RMDll0bN69 00:12:41.184 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RMDll0bN69 00:12:41.184 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:41.184 [2024-07-12 16:16:24.855036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.184 16:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:41.442 16:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:41.700 [2024-07-12 16:16:25.367177] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:41.700 [2024-07-12 16:16:25.367388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.700 16:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:41.959 malloc0 00:12:41.959 16:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:42.217 16:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RMDll0bN69 00:12:42.475 [2024-07-12 16:16:26.085995] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RMDll0bN69 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RMDll0bN69' 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72878 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # 
trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72878 /var/tmp/bdevperf.sock 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72878 ']' 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:42.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.475 16:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.475 [2024-07-12 16:16:26.159501] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:12:42.475 [2024-07-12 16:16:26.159759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72878 ] 00:12:42.734 [2024-07-12 16:16:26.300380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.734 [2024-07-12 16:16:26.372634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.734 [2024-07-12 16:16:26.407467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:43.668 16:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.668 16:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:43.668 16:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RMDll0bN69 00:12:43.668 [2024-07-12 16:16:27.325026] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:43.668 [2024-07-12 16:16:27.325771] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:43.926 TLSTESTn1 00:12:43.926 16:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:43.926 Running I/O for 10 seconds... 
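This is the positive path: bdevperf is launched idle (-z) on its own RPC socket, the TLS-enabled controller is attached through that socket (creating the TLSTESTn1 bdev), and the verify workload (queue depth 128, 4096-byte I/O, 10 s) is kicked off with the bdevperf.py helper. A condensed sketch of the sequence as it appears above; the sleep loop is a crude stand-in introduced here for the script's waitforlisten helper, everything else is this run's own commands and paths:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  until [ -S "$sock" ]; do sleep 0.1; done    # wait for the RPC socket to appear

  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.RMDll0bN69               # key file is mode 0600 at this point

  "$spdk"/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests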
00:12:53.904 00:12:53.904 Latency(us) 00:12:53.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.904 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:53.904 Verification LBA range: start 0x0 length 0x2000 00:12:53.904 TLSTESTn1 : 10.01 4428.58 17.30 0.00 0.00 28850.05 5779.08 27167.65 00:12:53.904 =================================================================================================================== 00:12:53.904 Total : 4428.58 17.30 0.00 0.00 28850.05 5779.08 27167.65 00:12:53.904 0 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72878 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72878 ']' 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72878 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72878 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72878' 00:12:53.904 killing process with pid 72878 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72878 00:12:53.904 Received shutdown signal, test time was about 10.000000 seconds 00:12:53.904 00:12:53.904 Latency(us) 00:12:53.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.904 =================================================================================================================== 00:12:53.904 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:53.904 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72878 00:12:53.904 [2024-07-12 16:16:37.585684] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.RMDll0bN69 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RMDll0bN69 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RMDll0bN69 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:54.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RMDll0bN69 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RMDll0bN69' 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73014 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73014 /var/tmp/bdevperf.sock 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73014 ']' 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.163 16:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.163 [2024-07-12 16:16:37.810755] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:12:54.163 [2024-07-12 16:16:37.811063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73014 ] 00:12:54.422 [2024-07-12 16:16:37.946076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.422 [2024-07-12 16:16:38.005185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.422 [2024-07-12 16:16:38.037094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RMDll0bN69 00:12:55.358 [2024-07-12 16:16:38.959669] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:55.358 [2024-07-12 16:16:38.960292] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:55.358 [2024-07-12 16:16:38.960557] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.RMDll0bN69 00:12:55.358 request: 00:12:55.358 { 00:12:55.358 "name": "TLSTEST", 00:12:55.358 "trtype": "tcp", 00:12:55.358 "traddr": "10.0.0.2", 00:12:55.358 "adrfam": "ipv4", 00:12:55.358 "trsvcid": "4420", 00:12:55.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:55.358 "prchk_reftag": false, 00:12:55.358 "prchk_guard": false, 00:12:55.358 "hdgst": false, 00:12:55.358 "ddgst": false, 00:12:55.358 "psk": "/tmp/tmp.RMDll0bN69", 00:12:55.358 "method": "bdev_nvme_attach_controller", 00:12:55.358 "req_id": 1 00:12:55.358 } 00:12:55.358 Got JSON-RPC error response 00:12:55.358 response: 00:12:55.358 { 00:12:55.358 "code": -1, 00:12:55.358 "message": "Operation not permitted" 00:12:55.358 } 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73014 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73014 ']' 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73014 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.358 16:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73014 00:12:55.358 killing process with pid 73014 00:12:55.358 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.358 00:12:55.358 Latency(us) 00:12:55.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.358 =================================================================================================================== 00:12:55.358 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:55.358 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:55.358 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:55.358 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73014' 00:12:55.358 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73014 00:12:55.358 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73014 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 72831 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72831 ']' 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72831 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72831 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72831' 00:12:55.617 killing process with pid 72831 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72831 00:12:55.617 [2024-07-12 16:16:39.205628] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:55.617 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72831 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73046 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73046 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73046 ']' 00:12:55.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:55.876 16:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:55.876 [2024-07-12 16:16:39.429267] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:12:55.876 [2024-07-12 16:16:39.429594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.876 [2024-07-12 16:16:39.564106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.135 [2024-07-12 16:16:39.630494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.135 [2024-07-12 16:16:39.630766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.135 [2024-07-12 16:16:39.630802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.135 [2024-07-12 16:16:39.630810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.135 [2024-07-12 16:16:39.630816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.135 [2024-07-12 16:16:39.630847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.135 [2024-07-12 16:16:39.659546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.RMDll0bN69 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.RMDll0bN69 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:56.702 16:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.RMDll0bN69 00:12:56.703 16:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RMDll0bN69 00:12:56.703 16:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:56.961 [2024-07-12 16:16:40.656353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.961 16:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:57.219 16:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:57.477 [2024-07-12 16:16:41.124423] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:12:57.477 [2024-07-12 16:16:41.124714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.477 16:16:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:57.735 malloc0 00:12:57.735 16:16:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:57.994 16:16:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RMDll0bN69 00:12:58.253 [2024-07-12 16:16:41.823016] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:58.253 [2024-07-12 16:16:41.823055] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:58.253 [2024-07-12 16:16:41.823087] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:12:58.253 request: 00:12:58.253 { 00:12:58.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.253 "host": "nqn.2016-06.io.spdk:host1", 00:12:58.253 "psk": "/tmp/tmp.RMDll0bN69", 00:12:58.253 "method": "nvmf_subsystem_add_host", 00:12:58.253 "req_id": 1 00:12:58.253 } 00:12:58.253 Got JSON-RPC error response 00:12:58.253 response: 00:12:58.253 { 00:12:58.253 "code": -32603, 00:12:58.253 "message": "Internal error" 00:12:58.253 } 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73046 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73046 ']' 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73046 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73046 00:12:58.253 killing process with pid 73046 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73046' 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73046 00:12:58.253 16:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73046 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.RMDll0bN69 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
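Both failures above come from the same guard: SPDK refuses to read a PSK file that is group- or world-accessible, on the initiator side (bdev_nvme_load_psk, surfacing as JSON-RPC -1 "Operation not permitted") and on the target side (tcp_load_psk behind nvmf_subsystem_add_host, surfacing as -32603 "Internal error"). The test deliberately flips the file to 0666 to provoke this, then restores 0600 before continuing. A one-line sketch of the owner-only mode the loaders expect; the stat check is an illustration added here (GNU coreutils syntax), not something tls.sh runs:

  chmod 0600 /tmp/tmp.RMDll0bN69          # owner read/write only; 0666 triggers "Incorrect permissions for PSK file"
  stat -c '%a %n' /tmp/tmp.RMDll0bN69     # should print: 600 /tmp/tmp.RMDll0bN69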
00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73109 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73109 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73109 ']' 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.512 16:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.512 [2024-07-12 16:16:42.094728] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:12:58.512 [2024-07-12 16:16:42.095064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.512 [2024-07-12 16:16:42.231441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.771 [2024-07-12 16:16:42.287577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.771 [2024-07-12 16:16:42.287892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.771 [2024-07-12 16:16:42.288033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.771 [2024-07-12 16:16:42.288175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.771 [2024-07-12 16:16:42.288217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
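Because the target is launched with -e 0xFFFF, every tracepoint group is enabled and the trace buffer is exposed at /dev/shm/nvmf_trace.0, as the notices above spell out. A small sketch of the two ways to capture it while this instance (app name nvmf, shm id 0) is running, following those notices; the output paths are placeholders chosen here, and the spdk_trace binary location depends on how SPDK was built or installed:

  # Read the live trace through the tool named in the notice...
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # ...or keep the raw shared-memory buffer for offline decoding later.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0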
00:12:58.771 [2024-07-12 16:16:42.288321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.771 [2024-07-12 16:16:42.316012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:59.337 16:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.337 16:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:59.337 16:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.337 16:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.337 16:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.596 16:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.596 16:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.RMDll0bN69 00:12:59.596 16:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RMDll0bN69 00:12:59.596 16:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:59.855 [2024-07-12 16:16:43.333575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.855 16:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:59.855 16:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:00.114 [2024-07-12 16:16:43.821731] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:00.114 [2024-07-12 16:16:43.822189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.373 16:16:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:00.373 malloc0 00:13:00.373 16:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:00.632 16:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RMDll0bN69 00:13:00.890 [2024-07-12 16:16:44.476667] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:00.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
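The setup_nvmf_tgt helper replayed above (and reflected in the saved configuration that follows) boils down to six RPCs against the target's default /var/tmp/spdk.sock: create the TCP transport, create the subsystem, add a TLS listener (-k), back it with a malloc bdev namespace, and register the host together with its PSK file. A condensed restatement using this run's values ($rpc and $key are shorthand introduced here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
  key=/tmp/tmp.RMDll0bN69                           # interchange-format PSK, mode 0600

  "$rpc" nvmf_create_transport -t tcp -o
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  "$rpc" bdev_malloc_create 32 4096 -b malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The -k flag marks the listener as requiring a secure channel, which is what shows up as "secure_channel": true in the nvmf_subsystem_add_listener entry of the saved config dumped below.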
00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73158 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73158 /var/tmp/bdevperf.sock 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73158 ']' 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.890 16:16:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:00.890 [2024-07-12 16:16:44.537133] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:00.891 [2024-07-12 16:16:44.537928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73158 ] 00:13:01.149 [2024-07-12 16:16:44.676060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.149 [2024-07-12 16:16:44.748977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.149 [2024-07-12 16:16:44.783970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:01.740 16:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.740 16:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:01.740 16:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RMDll0bN69 00:13:01.999 [2024-07-12 16:16:45.641758] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:01.999 [2024-07-12 16:16:45.641950] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:01.999 TLSTESTn1 00:13:02.258 16:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:02.518 16:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:02.518 "subsystems": [ 00:13:02.518 { 00:13:02.518 "subsystem": "keyring", 00:13:02.518 "config": [] 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "subsystem": "iobuf", 00:13:02.518 "config": [ 00:13:02.518 { 00:13:02.518 "method": "iobuf_set_options", 00:13:02.518 "params": { 00:13:02.518 "small_pool_count": 8192, 00:13:02.518 "large_pool_count": 1024, 00:13:02.518 "small_bufsize": 8192, 00:13:02.518 "large_bufsize": 135168 00:13:02.518 } 00:13:02.518 } 00:13:02.518 ] 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "subsystem": "sock", 00:13:02.518 "config": [ 00:13:02.518 { 00:13:02.518 
"method": "sock_set_default_impl", 00:13:02.518 "params": { 00:13:02.518 "impl_name": "uring" 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "sock_impl_set_options", 00:13:02.518 "params": { 00:13:02.518 "impl_name": "ssl", 00:13:02.518 "recv_buf_size": 4096, 00:13:02.518 "send_buf_size": 4096, 00:13:02.518 "enable_recv_pipe": true, 00:13:02.518 "enable_quickack": false, 00:13:02.518 "enable_placement_id": 0, 00:13:02.518 "enable_zerocopy_send_server": true, 00:13:02.518 "enable_zerocopy_send_client": false, 00:13:02.518 "zerocopy_threshold": 0, 00:13:02.518 "tls_version": 0, 00:13:02.518 "enable_ktls": false 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "sock_impl_set_options", 00:13:02.518 "params": { 00:13:02.518 "impl_name": "posix", 00:13:02.518 "recv_buf_size": 2097152, 00:13:02.518 "send_buf_size": 2097152, 00:13:02.518 "enable_recv_pipe": true, 00:13:02.518 "enable_quickack": false, 00:13:02.518 "enable_placement_id": 0, 00:13:02.518 "enable_zerocopy_send_server": true, 00:13:02.518 "enable_zerocopy_send_client": false, 00:13:02.518 "zerocopy_threshold": 0, 00:13:02.518 "tls_version": 0, 00:13:02.518 "enable_ktls": false 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "sock_impl_set_options", 00:13:02.518 "params": { 00:13:02.518 "impl_name": "uring", 00:13:02.518 "recv_buf_size": 2097152, 00:13:02.518 "send_buf_size": 2097152, 00:13:02.518 "enable_recv_pipe": true, 00:13:02.518 "enable_quickack": false, 00:13:02.518 "enable_placement_id": 0, 00:13:02.518 "enable_zerocopy_send_server": false, 00:13:02.518 "enable_zerocopy_send_client": false, 00:13:02.518 "zerocopy_threshold": 0, 00:13:02.518 "tls_version": 0, 00:13:02.518 "enable_ktls": false 00:13:02.518 } 00:13:02.518 } 00:13:02.518 ] 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "subsystem": "vmd", 00:13:02.518 "config": [] 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "subsystem": "accel", 00:13:02.518 "config": [ 00:13:02.518 { 00:13:02.518 "method": "accel_set_options", 00:13:02.518 "params": { 00:13:02.518 "small_cache_size": 128, 00:13:02.518 "large_cache_size": 16, 00:13:02.518 "task_count": 2048, 00:13:02.518 "sequence_count": 2048, 00:13:02.518 "buf_count": 2048 00:13:02.518 } 00:13:02.518 } 00:13:02.518 ] 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "subsystem": "bdev", 00:13:02.518 "config": [ 00:13:02.518 { 00:13:02.518 "method": "bdev_set_options", 00:13:02.518 "params": { 00:13:02.518 "bdev_io_pool_size": 65535, 00:13:02.518 "bdev_io_cache_size": 256, 00:13:02.518 "bdev_auto_examine": true, 00:13:02.518 "iobuf_small_cache_size": 128, 00:13:02.518 "iobuf_large_cache_size": 16 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "bdev_raid_set_options", 00:13:02.518 "params": { 00:13:02.518 "process_window_size_kb": 1024 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "bdev_iscsi_set_options", 00:13:02.518 "params": { 00:13:02.518 "timeout_sec": 30 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "bdev_nvme_set_options", 00:13:02.518 "params": { 00:13:02.518 "action_on_timeout": "none", 00:13:02.518 "timeout_us": 0, 00:13:02.518 "timeout_admin_us": 0, 00:13:02.518 "keep_alive_timeout_ms": 10000, 00:13:02.518 "arbitration_burst": 0, 00:13:02.518 "low_priority_weight": 0, 00:13:02.518 "medium_priority_weight": 0, 00:13:02.518 "high_priority_weight": 0, 00:13:02.518 "nvme_adminq_poll_period_us": 10000, 00:13:02.518 "nvme_ioq_poll_period_us": 0, 00:13:02.518 "io_queue_requests": 0, 00:13:02.518 
"delay_cmd_submit": true, 00:13:02.518 "transport_retry_count": 4, 00:13:02.518 "bdev_retry_count": 3, 00:13:02.518 "transport_ack_timeout": 0, 00:13:02.518 "ctrlr_loss_timeout_sec": 0, 00:13:02.518 "reconnect_delay_sec": 0, 00:13:02.518 "fast_io_fail_timeout_sec": 0, 00:13:02.518 "disable_auto_failback": false, 00:13:02.518 "generate_uuids": false, 00:13:02.518 "transport_tos": 0, 00:13:02.518 "nvme_error_stat": false, 00:13:02.518 "rdma_srq_size": 0, 00:13:02.518 "io_path_stat": false, 00:13:02.518 "allow_accel_sequence": false, 00:13:02.518 "rdma_max_cq_size": 0, 00:13:02.518 "rdma_cm_event_timeout_ms": 0, 00:13:02.518 "dhchap_digests": [ 00:13:02.518 "sha256", 00:13:02.518 "sha384", 00:13:02.518 "sha512" 00:13:02.518 ], 00:13:02.518 "dhchap_dhgroups": [ 00:13:02.518 "null", 00:13:02.518 "ffdhe2048", 00:13:02.518 "ffdhe3072", 00:13:02.518 "ffdhe4096", 00:13:02.518 "ffdhe6144", 00:13:02.518 "ffdhe8192" 00:13:02.518 ] 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "bdev_nvme_set_hotplug", 00:13:02.518 "params": { 00:13:02.518 "period_us": 100000, 00:13:02.518 "enable": false 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "bdev_malloc_create", 00:13:02.518 "params": { 00:13:02.518 "name": "malloc0", 00:13:02.518 "num_blocks": 8192, 00:13:02.518 "block_size": 4096, 00:13:02.518 "physical_block_size": 4096, 00:13:02.518 "uuid": "253cba7a-0140-4e02-b9bc-b866b684e195", 00:13:02.518 "optimal_io_boundary": 0 00:13:02.518 } 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "method": "bdev_wait_for_examine" 00:13:02.518 } 00:13:02.518 ] 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "subsystem": "nbd", 00:13:02.518 "config": [] 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "subsystem": "scheduler", 00:13:02.518 "config": [ 00:13:02.518 { 00:13:02.518 "method": "framework_set_scheduler", 00:13:02.518 "params": { 00:13:02.518 "name": "static" 00:13:02.518 } 00:13:02.518 } 00:13:02.518 ] 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "subsystem": "nvmf", 00:13:02.518 "config": [ 00:13:02.518 { 00:13:02.518 "method": "nvmf_set_config", 00:13:02.518 "params": { 00:13:02.519 "discovery_filter": "match_any", 00:13:02.519 "admin_cmd_passthru": { 00:13:02.519 "identify_ctrlr": false 00:13:02.519 } 00:13:02.519 } 00:13:02.519 }, 00:13:02.519 { 00:13:02.519 "method": "nvmf_set_max_subsystems", 00:13:02.519 "params": { 00:13:02.519 "max_subsystems": 1024 00:13:02.519 } 00:13:02.519 }, 00:13:02.519 { 00:13:02.519 "method": "nvmf_set_crdt", 00:13:02.519 "params": { 00:13:02.519 "crdt1": 0, 00:13:02.519 "crdt2": 0, 00:13:02.519 "crdt3": 0 00:13:02.519 } 00:13:02.519 }, 00:13:02.519 { 00:13:02.519 "method": "nvmf_create_transport", 00:13:02.519 "params": { 00:13:02.519 "trtype": "TCP", 00:13:02.519 "max_queue_depth": 128, 00:13:02.519 "max_io_qpairs_per_ctrlr": 127, 00:13:02.519 "in_capsule_data_size": 4096, 00:13:02.519 "max_io_size": 131072, 00:13:02.519 "io_unit_size": 131072, 00:13:02.519 "max_aq_depth": 128, 00:13:02.519 "num_shared_buffers": 511, 00:13:02.519 "buf_cache_size": 4294967295, 00:13:02.519 "dif_insert_or_strip": false, 00:13:02.519 "zcopy": false, 00:13:02.519 "c2h_success": false, 00:13:02.519 "sock_priority": 0, 00:13:02.519 "abort_timeout_sec": 1, 00:13:02.519 "ack_timeout": 0, 00:13:02.519 "data_wr_pool_size": 0 00:13:02.519 } 00:13:02.519 }, 00:13:02.519 { 00:13:02.519 "method": "nvmf_create_subsystem", 00:13:02.519 "params": { 00:13:02.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.519 "allow_any_host": false, 00:13:02.519 "serial_number": 
"SPDK00000000000001", 00:13:02.519 "model_number": "SPDK bdev Controller", 00:13:02.519 "max_namespaces": 10, 00:13:02.519 "min_cntlid": 1, 00:13:02.519 "max_cntlid": 65519, 00:13:02.519 "ana_reporting": false 00:13:02.519 } 00:13:02.519 }, 00:13:02.519 { 00:13:02.519 "method": "nvmf_subsystem_add_host", 00:13:02.519 "params": { 00:13:02.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.519 "host": "nqn.2016-06.io.spdk:host1", 00:13:02.519 "psk": "/tmp/tmp.RMDll0bN69" 00:13:02.519 } 00:13:02.519 }, 00:13:02.519 { 00:13:02.519 "method": "nvmf_subsystem_add_ns", 00:13:02.519 "params": { 00:13:02.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.519 "namespace": { 00:13:02.519 "nsid": 1, 00:13:02.519 "bdev_name": "malloc0", 00:13:02.519 "nguid": "253CBA7A01404E02B9BCB866B684E195", 00:13:02.519 "uuid": "253cba7a-0140-4e02-b9bc-b866b684e195", 00:13:02.519 "no_auto_visible": false 00:13:02.519 } 00:13:02.519 } 00:13:02.519 }, 00:13:02.519 { 00:13:02.519 "method": "nvmf_subsystem_add_listener", 00:13:02.519 "params": { 00:13:02.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.519 "listen_address": { 00:13:02.519 "trtype": "TCP", 00:13:02.519 "adrfam": "IPv4", 00:13:02.519 "traddr": "10.0.0.2", 00:13:02.519 "trsvcid": "4420" 00:13:02.519 }, 00:13:02.519 "secure_channel": true 00:13:02.519 } 00:13:02.519 } 00:13:02.519 ] 00:13:02.519 } 00:13:02.519 ] 00:13:02.519 }' 00:13:02.519 16:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:02.779 16:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:02.779 "subsystems": [ 00:13:02.779 { 00:13:02.779 "subsystem": "keyring", 00:13:02.779 "config": [] 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "subsystem": "iobuf", 00:13:02.779 "config": [ 00:13:02.779 { 00:13:02.779 "method": "iobuf_set_options", 00:13:02.779 "params": { 00:13:02.779 "small_pool_count": 8192, 00:13:02.779 "large_pool_count": 1024, 00:13:02.779 "small_bufsize": 8192, 00:13:02.779 "large_bufsize": 135168 00:13:02.779 } 00:13:02.779 } 00:13:02.779 ] 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "subsystem": "sock", 00:13:02.779 "config": [ 00:13:02.779 { 00:13:02.779 "method": "sock_set_default_impl", 00:13:02.779 "params": { 00:13:02.779 "impl_name": "uring" 00:13:02.779 } 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "method": "sock_impl_set_options", 00:13:02.779 "params": { 00:13:02.779 "impl_name": "ssl", 00:13:02.779 "recv_buf_size": 4096, 00:13:02.779 "send_buf_size": 4096, 00:13:02.779 "enable_recv_pipe": true, 00:13:02.779 "enable_quickack": false, 00:13:02.779 "enable_placement_id": 0, 00:13:02.779 "enable_zerocopy_send_server": true, 00:13:02.779 "enable_zerocopy_send_client": false, 00:13:02.779 "zerocopy_threshold": 0, 00:13:02.779 "tls_version": 0, 00:13:02.779 "enable_ktls": false 00:13:02.779 } 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "method": "sock_impl_set_options", 00:13:02.779 "params": { 00:13:02.779 "impl_name": "posix", 00:13:02.779 "recv_buf_size": 2097152, 00:13:02.779 "send_buf_size": 2097152, 00:13:02.779 "enable_recv_pipe": true, 00:13:02.779 "enable_quickack": false, 00:13:02.779 "enable_placement_id": 0, 00:13:02.779 "enable_zerocopy_send_server": true, 00:13:02.779 "enable_zerocopy_send_client": false, 00:13:02.779 "zerocopy_threshold": 0, 00:13:02.779 "tls_version": 0, 00:13:02.779 "enable_ktls": false 00:13:02.779 } 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "method": "sock_impl_set_options", 00:13:02.779 "params": { 00:13:02.779 "impl_name": "uring", 
00:13:02.779 "recv_buf_size": 2097152, 00:13:02.779 "send_buf_size": 2097152, 00:13:02.779 "enable_recv_pipe": true, 00:13:02.779 "enable_quickack": false, 00:13:02.779 "enable_placement_id": 0, 00:13:02.779 "enable_zerocopy_send_server": false, 00:13:02.779 "enable_zerocopy_send_client": false, 00:13:02.779 "zerocopy_threshold": 0, 00:13:02.779 "tls_version": 0, 00:13:02.779 "enable_ktls": false 00:13:02.779 } 00:13:02.779 } 00:13:02.779 ] 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "subsystem": "vmd", 00:13:02.779 "config": [] 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "subsystem": "accel", 00:13:02.779 "config": [ 00:13:02.779 { 00:13:02.779 "method": "accel_set_options", 00:13:02.779 "params": { 00:13:02.779 "small_cache_size": 128, 00:13:02.779 "large_cache_size": 16, 00:13:02.779 "task_count": 2048, 00:13:02.779 "sequence_count": 2048, 00:13:02.779 "buf_count": 2048 00:13:02.779 } 00:13:02.779 } 00:13:02.779 ] 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "subsystem": "bdev", 00:13:02.779 "config": [ 00:13:02.779 { 00:13:02.779 "method": "bdev_set_options", 00:13:02.779 "params": { 00:13:02.779 "bdev_io_pool_size": 65535, 00:13:02.779 "bdev_io_cache_size": 256, 00:13:02.779 "bdev_auto_examine": true, 00:13:02.779 "iobuf_small_cache_size": 128, 00:13:02.779 "iobuf_large_cache_size": 16 00:13:02.779 } 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "method": "bdev_raid_set_options", 00:13:02.779 "params": { 00:13:02.779 "process_window_size_kb": 1024 00:13:02.779 } 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "method": "bdev_iscsi_set_options", 00:13:02.779 "params": { 00:13:02.779 "timeout_sec": 30 00:13:02.779 } 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "method": "bdev_nvme_set_options", 00:13:02.779 "params": { 00:13:02.779 "action_on_timeout": "none", 00:13:02.779 "timeout_us": 0, 00:13:02.779 "timeout_admin_us": 0, 00:13:02.779 "keep_alive_timeout_ms": 10000, 00:13:02.779 "arbitration_burst": 0, 00:13:02.779 "low_priority_weight": 0, 00:13:02.779 "medium_priority_weight": 0, 00:13:02.779 "high_priority_weight": 0, 00:13:02.779 "nvme_adminq_poll_period_us": 10000, 00:13:02.779 "nvme_ioq_poll_period_us": 0, 00:13:02.779 "io_queue_requests": 512, 00:13:02.779 "delay_cmd_submit": true, 00:13:02.779 "transport_retry_count": 4, 00:13:02.779 "bdev_retry_count": 3, 00:13:02.779 "transport_ack_timeout": 0, 00:13:02.779 "ctrlr_loss_timeout_sec": 0, 00:13:02.779 "reconnect_delay_sec": 0, 00:13:02.779 "fast_io_fail_timeout_sec": 0, 00:13:02.779 "disable_auto_failback": false, 00:13:02.779 "generate_uuids": false, 00:13:02.779 "transport_tos": 0, 00:13:02.779 "nvme_error_stat": false, 00:13:02.779 "rdma_srq_size": 0, 00:13:02.779 "io_path_stat": false, 00:13:02.779 "allow_accel_sequence": false, 00:13:02.779 "rdma_max_cq_size": 0, 00:13:02.779 "rdma_cm_event_timeout_ms": 0, 00:13:02.779 "dhchap_digests": [ 00:13:02.779 "sha256", 00:13:02.779 "sha384", 00:13:02.779 "sha512" 00:13:02.779 ], 00:13:02.779 "dhchap_dhgroups": [ 00:13:02.779 "null", 00:13:02.779 "ffdhe2048", 00:13:02.779 "ffdhe3072", 00:13:02.779 "ffdhe4096", 00:13:02.779 "ffdhe6144", 00:13:02.779 "ffdhe8192" 00:13:02.779 ] 00:13:02.779 } 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "method": "bdev_nvme_attach_controller", 00:13:02.779 "params": { 00:13:02.779 "name": "TLSTEST", 00:13:02.779 "trtype": "TCP", 00:13:02.779 "adrfam": "IPv4", 00:13:02.779 "traddr": "10.0.0.2", 00:13:02.779 "trsvcid": "4420", 00:13:02.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.779 "prchk_reftag": false, 00:13:02.779 "prchk_guard": false, 00:13:02.779 
"ctrlr_loss_timeout_sec": 0, 00:13:02.779 "reconnect_delay_sec": 0, 00:13:02.779 "fast_io_fail_timeout_sec": 0, 00:13:02.779 "psk": "/tmp/tmp.RMDll0bN69", 00:13:02.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:02.779 "hdgst": false, 00:13:02.779 "ddgst": false 00:13:02.779 } 00:13:02.779 }, 00:13:02.779 { 00:13:02.779 "method": "bdev_nvme_set_hotplug", 00:13:02.780 "params": { 00:13:02.780 "period_us": 100000, 00:13:02.780 "enable": false 00:13:02.780 } 00:13:02.780 }, 00:13:02.780 { 00:13:02.780 "method": "bdev_wait_for_examine" 00:13:02.780 } 00:13:02.780 ] 00:13:02.780 }, 00:13:02.780 { 00:13:02.780 "subsystem": "nbd", 00:13:02.780 "config": [] 00:13:02.780 } 00:13:02.780 ] 00:13:02.780 }' 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73158 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73158 ']' 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73158 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73158 00:13:02.780 killing process with pid 73158 00:13:02.780 Received shutdown signal, test time was about 10.000000 seconds 00:13:02.780 00:13:02.780 Latency(us) 00:13:02.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.780 =================================================================================================================== 00:13:02.780 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73158' 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73158 00:13:02.780 [2024-07-12 16:16:46.359736] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:02.780 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73158 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73109 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73109 ']' 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73109 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73109 00:13:03.039 killing process with pid 73109 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73109' 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73109 00:13:03.039 [2024-07-12 16:16:46.540555] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73109 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.039 16:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:03.039 "subsystems": [ 00:13:03.039 { 00:13:03.039 "subsystem": "keyring", 00:13:03.039 "config": [] 00:13:03.039 }, 00:13:03.039 { 00:13:03.039 "subsystem": "iobuf", 00:13:03.039 "config": [ 00:13:03.039 { 00:13:03.039 "method": "iobuf_set_options", 00:13:03.039 "params": { 00:13:03.039 "small_pool_count": 8192, 00:13:03.039 "large_pool_count": 1024, 00:13:03.039 "small_bufsize": 8192, 00:13:03.040 "large_bufsize": 135168 00:13:03.040 } 00:13:03.040 } 00:13:03.040 ] 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "subsystem": "sock", 00:13:03.040 "config": [ 00:13:03.040 { 00:13:03.040 "method": "sock_set_default_impl", 00:13:03.040 "params": { 00:13:03.040 "impl_name": "uring" 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "sock_impl_set_options", 00:13:03.040 "params": { 00:13:03.040 "impl_name": "ssl", 00:13:03.040 "recv_buf_size": 4096, 00:13:03.040 "send_buf_size": 4096, 00:13:03.040 "enable_recv_pipe": true, 00:13:03.040 "enable_quickack": false, 00:13:03.040 "enable_placement_id": 0, 00:13:03.040 "enable_zerocopy_send_server": true, 00:13:03.040 "enable_zerocopy_send_client": false, 00:13:03.040 "zerocopy_threshold": 0, 00:13:03.040 "tls_version": 0, 00:13:03.040 "enable_ktls": false 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "sock_impl_set_options", 00:13:03.040 "params": { 00:13:03.040 "impl_name": "posix", 00:13:03.040 "recv_buf_size": 2097152, 00:13:03.040 "send_buf_size": 2097152, 00:13:03.040 "enable_recv_pipe": true, 00:13:03.040 "enable_quickack": false, 00:13:03.040 "enable_placement_id": 0, 00:13:03.040 "enable_zerocopy_send_server": true, 00:13:03.040 "enable_zerocopy_send_client": false, 00:13:03.040 "zerocopy_threshold": 0, 00:13:03.040 "tls_version": 0, 00:13:03.040 "enable_ktls": false 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "sock_impl_set_options", 00:13:03.040 "params": { 00:13:03.040 "impl_name": "uring", 00:13:03.040 "recv_buf_size": 2097152, 00:13:03.040 "send_buf_size": 2097152, 00:13:03.040 "enable_recv_pipe": true, 00:13:03.040 "enable_quickack": false, 00:13:03.040 "enable_placement_id": 0, 00:13:03.040 "enable_zerocopy_send_server": false, 00:13:03.040 "enable_zerocopy_send_client": false, 00:13:03.040 "zerocopy_threshold": 0, 00:13:03.040 "tls_version": 0, 00:13:03.040 "enable_ktls": false 00:13:03.040 } 00:13:03.040 } 00:13:03.040 ] 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "subsystem": "vmd", 00:13:03.040 "config": [] 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "subsystem": "accel", 00:13:03.040 "config": [ 00:13:03.040 { 00:13:03.040 "method": "accel_set_options", 00:13:03.040 "params": { 00:13:03.040 "small_cache_size": 128, 00:13:03.040 "large_cache_size": 16, 00:13:03.040 "task_count": 2048, 00:13:03.040 "sequence_count": 2048, 00:13:03.040 "buf_count": 2048 00:13:03.040 } 00:13:03.040 } 00:13:03.040 ] 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "subsystem": "bdev", 00:13:03.040 "config": [ 00:13:03.040 { 
00:13:03.040 "method": "bdev_set_options", 00:13:03.040 "params": { 00:13:03.040 "bdev_io_pool_size": 65535, 00:13:03.040 "bdev_io_cache_size": 256, 00:13:03.040 "bdev_auto_examine": true, 00:13:03.040 "iobuf_small_cache_size": 128, 00:13:03.040 "iobuf_large_cache_size": 16 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "bdev_raid_set_options", 00:13:03.040 "params": { 00:13:03.040 "process_window_size_kb": 1024 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "bdev_iscsi_set_options", 00:13:03.040 "params": { 00:13:03.040 "timeout_sec": 30 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "bdev_nvme_set_options", 00:13:03.040 "params": { 00:13:03.040 "action_on_timeout": "none", 00:13:03.040 "timeout_us": 0, 00:13:03.040 "timeout_admin_us": 0, 00:13:03.040 "keep_alive_timeout_ms": 10000, 00:13:03.040 "arbitration_burst": 0, 00:13:03.040 "low_priority_weight": 0, 00:13:03.040 "medium_priority_weight": 0, 00:13:03.040 "high_priority_weight": 0, 00:13:03.040 "nvme_adminq_poll_period_us": 10000, 00:13:03.040 "nvme_ioq_poll_period_us": 0, 00:13:03.040 "io_queue_requests": 0, 00:13:03.040 "delay_cmd_submit": true, 00:13:03.040 "transport_retry_count": 4, 00:13:03.040 "bdev_retry_count": 3, 00:13:03.040 "transport_ack_timeout": 0, 00:13:03.040 "ctrlr_loss_timeout_sec": 0, 00:13:03.040 "reconnect_delay_sec": 0, 00:13:03.040 "fast_io_fail_timeout_sec": 0, 00:13:03.040 "disable_auto_failback": false, 00:13:03.040 "generate_uuids": false, 00:13:03.040 "transport_tos": 0, 00:13:03.040 "nvme_error_stat": false, 00:13:03.040 "rdma_srq_size": 0, 00:13:03.040 "io_path_stat": false, 00:13:03.040 "allow_accel_sequence": false, 00:13:03.040 "rdma_max_cq_size": 0, 00:13:03.040 "rdma_cm_event_timeout_ms": 0, 00:13:03.040 "dhchap_digests": [ 00:13:03.040 "sha256", 00:13:03.040 "sha384", 00:13:03.040 "sha512" 00:13:03.040 ], 00:13:03.040 "dhchap_dhgroups": [ 00:13:03.040 "null", 00:13:03.040 "ffdhe2048", 00:13:03.040 "ffdhe3072", 00:13:03.040 "ffdhe4096", 00:13:03.040 "ffdhe6144", 00:13:03.040 "ffdhe8192" 00:13:03.040 ] 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "bdev_nvme_set_hotplug", 00:13:03.040 "params": { 00:13:03.040 "period_us": 100000, 00:13:03.040 "enable": false 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "bdev_malloc_create", 00:13:03.040 "params": { 00:13:03.040 "name": "malloc0", 00:13:03.040 "num_blocks": 8192, 00:13:03.040 "block_size": 4096, 00:13:03.040 "physical_block_size": 4096, 00:13:03.040 "uuid": "253cba7a-0140-4e02-b9bc-b866b684e195", 00:13:03.040 "optimal_io_boundary": 0 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "bdev_wait_for_examine" 00:13:03.040 } 00:13:03.040 ] 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "subsystem": "nbd", 00:13:03.040 "config": [] 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "subsystem": "scheduler", 00:13:03.040 "config": [ 00:13:03.040 { 00:13:03.040 "method": "framework_set_scheduler", 00:13:03.040 "params": { 00:13:03.040 "name": "static" 00:13:03.040 } 00:13:03.040 } 00:13:03.040 ] 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "subsystem": "nvmf", 00:13:03.040 "config": [ 00:13:03.040 { 00:13:03.040 "method": "nvmf_set_config", 00:13:03.040 "params": { 00:13:03.040 "discovery_filter": "match_any", 00:13:03.040 "admin_cmd_passthru": { 00:13:03.040 "identify_ctrlr": false 00:13:03.040 } 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "nvmf_set_max_subsystems", 00:13:03.040 "params": { 00:13:03.040 
"max_subsystems": 1024 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "nvmf_set_crdt", 00:13:03.040 "params": { 00:13:03.040 "crdt1": 0, 00:13:03.040 "crdt2": 0, 00:13:03.040 "crdt3": 0 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "nvmf_create_transport", 00:13:03.040 "params": { 00:13:03.040 "trtype": "TCP", 00:13:03.040 "max_queue_depth": 128, 00:13:03.040 "max_io_qpairs_per_ctrlr": 127, 00:13:03.040 "in_capsule_data_size": 4096, 00:13:03.040 "max_io_size": 131072, 00:13:03.040 "io_unit_size": 131072, 00:13:03.040 "max_aq_depth": 128, 00:13:03.040 "num_shared_buffers": 511, 00:13:03.040 "buf_cache_size": 4294967295, 00:13:03.040 "dif_insert_or_strip": false, 00:13:03.040 "zcopy": false, 00:13:03.040 "c2h_success": false, 00:13:03.040 "sock_priority": 0, 00:13:03.040 "abort_timeout_sec": 1, 00:13:03.040 "ack_timeout": 0, 00:13:03.040 "data_wr_pool_size": 0 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "nvmf_create_subsystem", 00:13:03.040 "params": { 00:13:03.040 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.040 "allow_any_host": false, 00:13:03.040 "serial_number": "SPDK00000000000001", 00:13:03.040 "model_number": "SPDK bdev Controller", 00:13:03.040 "max_namespaces": 10, 00:13:03.040 "min_cntlid": 1, 00:13:03.040 "max_cntlid": 65519, 00:13:03.040 "ana_reporting": false 00:13:03.040 } 00:13:03.040 }, 00:13:03.040 { 00:13:03.040 "method": "nvmf_subsystem_add_host", 00:13:03.040 "params": { 00:13:03.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.041 "host": "nqn.2016-06.io.spdk:host1", 00:13:03.041 "psk": "/tmp/tmp.RMDll0bN69" 00:13:03.041 } 00:13:03.041 }, 00:13:03.041 { 00:13:03.041 "method": "nvmf_subsystem_add_ns", 00:13:03.041 "params": { 00:13:03.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.041 "namespace": { 00:13:03.041 "nsid": 1, 00:13:03.041 "bdev_name": "malloc0", 00:13:03.041 "nguid": "253CBA7A01404E02B9BCB866B684E195", 00:13:03.041 "uuid": "253cba7a-0140-4e02-b9bc-b866b684e195", 00:13:03.041 "no_auto_visible": false 00:13:03.041 } 00:13:03.041 } 00:13:03.041 }, 00:13:03.041 { 00:13:03.041 "method": "nvmf_subsystem_add_listener", 00:13:03.041 "params": { 00:13:03.041 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.041 "listen_address": { 00:13:03.041 "trtype": "TCP", 00:13:03.041 "adrfam": "IPv4", 00:13:03.041 "traddr": "10.0.0.2", 00:13:03.041 "trsvcid": "4420" 00:13:03.041 }, 00:13:03.041 "secure_channel": true 00:13:03.041 } 00:13:03.041 } 00:13:03.041 ] 00:13:03.041 } 00:13:03.041 ] 00:13:03.041 }' 00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73201 00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73201 00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73201 ']' 00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.041 16:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.041 [2024-07-12 16:16:46.765575] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:03.041 [2024-07-12 16:16:46.765684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.299 [2024-07-12 16:16:46.904720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.299 [2024-07-12 16:16:46.962197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.299 [2024-07-12 16:16:46.962263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.299 [2024-07-12 16:16:46.962291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.299 [2024-07-12 16:16:46.962299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.299 [2024-07-12 16:16:46.962306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.299 [2024-07-12 16:16:46.962382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.558 [2024-07-12 16:16:47.105074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:03.558 [2024-07-12 16:16:47.153281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.558 [2024-07-12 16:16:47.169201] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:03.558 [2024-07-12 16:16:47.185252] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:03.558 [2024-07-12 16:16:47.185464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73233 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73233 /var/tmp/bdevperf.sock 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73233 ']' 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:04.124 16:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:04.124 "subsystems": [ 00:13:04.124 { 00:13:04.124 "subsystem": "keyring", 00:13:04.124 "config": [] 00:13:04.124 }, 00:13:04.124 { 00:13:04.124 "subsystem": "iobuf", 00:13:04.124 "config": [ 00:13:04.124 { 00:13:04.124 "method": "iobuf_set_options", 00:13:04.124 "params": { 00:13:04.124 "small_pool_count": 8192, 00:13:04.124 "large_pool_count": 1024, 00:13:04.124 "small_bufsize": 8192, 00:13:04.124 "large_bufsize": 135168 00:13:04.124 } 00:13:04.125 } 00:13:04.125 ] 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "subsystem": "sock", 00:13:04.125 "config": [ 00:13:04.125 { 00:13:04.125 "method": "sock_set_default_impl", 00:13:04.125 "params": { 00:13:04.125 "impl_name": "uring" 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "method": "sock_impl_set_options", 00:13:04.125 "params": { 00:13:04.125 "impl_name": "ssl", 00:13:04.125 "recv_buf_size": 4096, 00:13:04.125 "send_buf_size": 4096, 00:13:04.125 "enable_recv_pipe": true, 00:13:04.125 "enable_quickack": false, 00:13:04.125 "enable_placement_id": 0, 00:13:04.125 "enable_zerocopy_send_server": true, 00:13:04.125 "enable_zerocopy_send_client": false, 00:13:04.125 "zerocopy_threshold": 0, 00:13:04.125 "tls_version": 0, 00:13:04.125 "enable_ktls": false 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "method": "sock_impl_set_options", 00:13:04.125 "params": { 00:13:04.125 "impl_name": "posix", 00:13:04.125 "recv_buf_size": 2097152, 00:13:04.125 "send_buf_size": 2097152, 00:13:04.125 "enable_recv_pipe": true, 00:13:04.125 "enable_quickack": false, 00:13:04.125 "enable_placement_id": 0, 00:13:04.125 "enable_zerocopy_send_server": true, 00:13:04.125 "enable_zerocopy_send_client": false, 00:13:04.125 "zerocopy_threshold": 0, 00:13:04.125 "tls_version": 0, 00:13:04.125 "enable_ktls": false 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "method": "sock_impl_set_options", 00:13:04.125 "params": { 00:13:04.125 "impl_name": "uring", 00:13:04.125 "recv_buf_size": 2097152, 00:13:04.125 "send_buf_size": 2097152, 00:13:04.125 "enable_recv_pipe": true, 00:13:04.125 "enable_quickack": false, 00:13:04.125 "enable_placement_id": 0, 00:13:04.125 "enable_zerocopy_send_server": false, 00:13:04.125 "enable_zerocopy_send_client": false, 00:13:04.125 "zerocopy_threshold": 0, 00:13:04.125 "tls_version": 0, 00:13:04.125 "enable_ktls": false 00:13:04.125 } 00:13:04.125 } 00:13:04.125 ] 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "subsystem": "vmd", 00:13:04.125 "config": [] 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "subsystem": "accel", 00:13:04.125 "config": [ 00:13:04.125 { 00:13:04.125 "method": "accel_set_options", 00:13:04.125 "params": { 00:13:04.125 "small_cache_size": 128, 00:13:04.125 "large_cache_size": 16, 00:13:04.125 "task_count": 2048, 00:13:04.125 "sequence_count": 2048, 00:13:04.125 "buf_count": 2048 00:13:04.125 } 00:13:04.125 } 00:13:04.125 ] 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "subsystem": "bdev", 00:13:04.125 "config": [ 00:13:04.125 { 00:13:04.125 "method": "bdev_set_options", 00:13:04.125 "params": { 00:13:04.125 "bdev_io_pool_size": 65535, 00:13:04.125 "bdev_io_cache_size": 256, 00:13:04.125 "bdev_auto_examine": true, 00:13:04.125 "iobuf_small_cache_size": 128, 00:13:04.125 "iobuf_large_cache_size": 16 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 
00:13:04.125 "method": "bdev_raid_set_options", 00:13:04.125 "params": { 00:13:04.125 "process_window_size_kb": 1024 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "method": "bdev_iscsi_set_options", 00:13:04.125 "params": { 00:13:04.125 "timeout_sec": 30 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "method": "bdev_nvme_set_options", 00:13:04.125 "params": { 00:13:04.125 "action_on_timeout": "none", 00:13:04.125 "timeout_us": 0, 00:13:04.125 "timeout_admin_us": 0, 00:13:04.125 "keep_alive_timeout_ms": 10000, 00:13:04.125 "arbitration_burst": 0, 00:13:04.125 "low_priority_weight": 0, 00:13:04.125 "medium_priority_weight": 0, 00:13:04.125 "high_priority_weight": 0, 00:13:04.125 "nvme_adminq_poll_period_us": 10000, 00:13:04.125 "nvme_ioq_poll_period_us": 0, 00:13:04.125 "io_queue_requests": 512, 00:13:04.125 "delay_cmd_submit": true, 00:13:04.125 "transport_retry_count": 4, 00:13:04.125 "bdev_retry_count": 3, 00:13:04.125 "transport_ack_timeout": 0, 00:13:04.125 "ctrlr_loss_timeout_sec": 0, 00:13:04.125 "reconnect_delay_sec": 0, 00:13:04.125 "fast_io_fail_timeout_sec": 0, 00:13:04.125 "disable_auto_failback": false, 00:13:04.125 "generate_uuids": false, 00:13:04.125 "transport_tos": 0, 00:13:04.125 "nvme_error_stat": false, 00:13:04.125 "rdma_srq_size": 0, 00:13:04.125 "io_path_stat": false, 00:13:04.125 "allow_accel_sequence": false, 00:13:04.125 "rdma_max_cq_size": 0, 00:13:04.125 "rdma_cm_event_timeout_ms": 0, 00:13:04.125 "dhchap_digests": [ 00:13:04.125 "sha256", 00:13:04.125 "sha384", 00:13:04.125 "sha512" 00:13:04.125 ], 00:13:04.125 "dhchap_dhgroups": [ 00:13:04.125 "null", 00:13:04.125 "ffdhe2048", 00:13:04.125 "ffdhe3072", 00:13:04.125 "ffdhe4096", 00:13:04.125 "ffdhe6144", 00:13:04.125 "ffdhe8192" 00:13:04.125 ] 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "method": "bdev_nvme_attach_controller", 00:13:04.125 "params": { 00:13:04.125 "name": "TLSTEST", 00:13:04.125 "trtype": "TCP", 00:13:04.125 "adrfam": "IPv4", 00:13:04.125 "traddr": "10.0.0.2", 00:13:04.125 "trsvcid": "4420", 00:13:04.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.125 "prchk_reftag": false, 00:13:04.125 "prchk_guard": false, 00:13:04.125 "ctrlr_loss_timeout_sec": 0, 00:13:04.125 "reconnect_delay_sec": 0, 00:13:04.125 "fast_io_fail_timeout_sec": 0, 00:13:04.125 "psk": "/tmp/tmp.RMDll0bN69", 00:13:04.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:04.125 "hdgst": false, 00:13:04.125 "ddgst": false 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "method": "bdev_nvme_set_hotplug", 00:13:04.125 "params": { 00:13:04.125 "period_us": 100000, 00:13:04.125 "enable": false 00:13:04.125 } 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "method": "bdev_wait_for_examine" 00:13:04.125 } 00:13:04.125 ] 00:13:04.125 }, 00:13:04.125 { 00:13:04.125 "subsystem": "nbd", 00:13:04.125 "config": [] 00:13:04.125 } 00:13:04.125 ] 00:13:04.125 }' 00:13:04.125 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.125 16:16:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.125 [2024-07-12 16:16:47.821028] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:13:04.125 [2024-07-12 16:16:47.821130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73233 ] 00:13:04.383 [2024-07-12 16:16:47.954997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.383 [2024-07-12 16:16:48.013173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.641 [2024-07-12 16:16:48.124716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:04.641 [2024-07-12 16:16:48.147724] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:04.641 [2024-07-12 16:16:48.147867] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:05.207 16:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.207 16:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:05.207 16:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:05.207 Running I/O for 10 seconds... 00:13:15.178 00:13:15.178 Latency(us) 00:13:15.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.178 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:15.178 Verification LBA range: start 0x0 length 0x2000 00:13:15.178 TLSTESTn1 : 10.03 4134.09 16.15 0.00 0.00 30899.40 7626.01 21805.61 00:13:15.178 =================================================================================================================== 00:13:15.178 Total : 4134.09 16.15 0.00 0.00 30899.40 7626.01 21805.61 00:13:15.178 0 00:13:15.178 16:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:15.178 16:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73233 00:13:15.178 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73233 ']' 00:13:15.178 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73233 00:13:15.178 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:15.437 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.437 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73233 00:13:15.437 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:15.437 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:15.437 killing process with pid 73233 00:13:15.437 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73233' 00:13:15.437 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73233 00:13:15.437 Received shutdown signal, test time was about 10.000000 seconds 00:13:15.437 00:13:15.437 Latency(us) 00:13:15.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.437 =================================================================================================================== 00:13:15.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:15.437 [2024-07-12 16:16:58.923976] app.c:1023:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:15.437 16:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73233 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73201 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73201 ']' 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73201 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73201 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:15.437 killing process with pid 73201 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73201' 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73201 00:13:15.437 [2024-07-12 16:16:59.108071] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:15.437 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73201 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73373 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73373 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73373 ']' 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.697 16:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.697 [2024-07-12 16:16:59.332434] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:15.697 [2024-07-12 16:16:59.332543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.956 [2024-07-12 16:16:59.472354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.956 [2024-07-12 16:16:59.542293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:15.956 [2024-07-12 16:16:59.542364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.956 [2024-07-12 16:16:59.542378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.956 [2024-07-12 16:16:59.542388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.956 [2024-07-12 16:16:59.542397] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.956 [2024-07-12 16:16:59.542425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.956 [2024-07-12 16:16:59.578142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.RMDll0bN69 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RMDll0bN69 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:16.893 [2024-07-12 16:17:00.547608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.893 16:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:17.152 16:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:17.411 [2024-07-12 16:17:01.059744] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:17.411 [2024-07-12 16:17:01.060041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.411 16:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:17.669 malloc0 00:13:17.669 16:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:17.929 16:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RMDll0bN69 00:13:18.188 [2024-07-12 16:17:01.726560] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73422 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
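Condensed from the setup_nvmf_tgt steps traced above (target/tls.sh@51-58), the target-side sequence that yields the TLS-enabled listener and the PSK-bound host entry is, using the temporary key file generated earlier in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-enabled (still flagged experimental in the log)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # --psk with a file path is the deprecated form (see the 'PSK path' warning above)
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RMDll0bN69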
00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73422 /var/tmp/bdevperf.sock 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73422 ']' 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.188 16:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.188 [2024-07-12 16:17:01.798753] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:18.188 [2024-07-12 16:17:01.798888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73422 ] 00:13:18.446 [2024-07-12 16:17:01.938443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.446 [2024-07-12 16:17:02.008540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.446 [2024-07-12 16:17:02.040862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:19.014 16:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.014 16:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:19.014 16:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RMDll0bN69 00:13:19.284 16:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:19.546 [2024-07-12 16:17:03.136648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:19.546 nvme0n1 00:13:19.546 16:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:19.804 Running I/O for 1 seconds... 
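Unlike the earlier attach at target/tls.sh@192, which passed the PSK file path directly (and tripped the spdk_nvme_ctrlr_opts.psk deprecation warning), the attach traced just above first registers the key in the keyring and then references it by name. The essential commands, as issued against the bdevperf RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register the PSK file under the name "key0"
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RMDll0bN69
  # Attach the TLS-protected controller, referencing the key by name
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Drive I/O through the resulting nvme0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests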
00:13:20.743 00:13:20.743 Latency(us) 00:13:20.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.743 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:20.743 Verification LBA range: start 0x0 length 0x2000 00:13:20.743 nvme0n1 : 1.03 3737.92 14.60 0.00 0.00 33824.53 8043.05 21328.99 00:13:20.743 =================================================================================================================== 00:13:20.743 Total : 3737.92 14.60 0.00 0.00 33824.53 8043.05 21328.99 00:13:20.743 0 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73422 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73422 ']' 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73422 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73422 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:20.743 killing process with pid 73422 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73422' 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73422 00:13:20.743 Received shutdown signal, test time was about 1.000000 seconds 00:13:20.743 00:13:20.743 Latency(us) 00:13:20.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.743 =================================================================================================================== 00:13:20.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:20.743 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73422 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73373 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73373 ']' 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73373 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73373 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:21.002 killing process with pid 73373 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73373' 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73373 00:13:21.002 [2024-07-12 16:17:04.620876] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:21.002 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73373 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73473 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73473 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73473 ']' 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.259 16:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.259 [2024-07-12 16:17:04.860998] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:21.259 [2024-07-12 16:17:04.861099] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.517 [2024-07-12 16:17:04.999464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.517 [2024-07-12 16:17:05.058806] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.517 [2024-07-12 16:17:05.058864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.517 [2024-07-12 16:17:05.058890] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.517 [2024-07-12 16:17:05.058908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.517 [2024-07-12 16:17:05.058915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
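Because this nvmf_tgt instance is started with -e 0xFFFF, all tracepoint groups are enabled, and the startup notices above spell out how to inspect them; the two options they suggest reduce to:

  # Take a live snapshot of the nvmf trace for app instance 0
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0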
00:13:21.517 [2024-07-12 16:17:05.058942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.517 [2024-07-12 16:17:05.087780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:22.083 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.083 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:22.083 16:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.083 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.083 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.366 [2024-07-12 16:17:05.819341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.366 malloc0 00:13:22.366 [2024-07-12 16:17:05.845777] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:22.366 [2024-07-12 16:17:05.845969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=73505 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 73505 /var/tmp/bdevperf.sock 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73505 ']' 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.366 16:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.366 [2024-07-12 16:17:05.919172] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:13:22.366 [2024-07-12 16:17:05.919251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73505 ] 00:13:22.366 [2024-07-12 16:17:06.053023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.636 [2024-07-12 16:17:06.122269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.636 [2024-07-12 16:17:06.154679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:22.636 16:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.636 16:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:22.636 16:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RMDll0bN69 00:13:22.893 16:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:23.152 [2024-07-12 16:17:06.670731] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.152 nvme0n1 00:13:23.152 16:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:23.152 Running I/O for 1 seconds... 00:13:24.524 00:13:24.524 Latency(us) 00:13:24.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.524 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.524 Verification LBA range: start 0x0 length 0x2000 00:13:24.524 nvme0n1 : 1.03 3705.14 14.47 0.00 0.00 34101.99 9472.93 22401.40 00:13:24.524 =================================================================================================================== 00:13:24.524 Total : 3705.14 14.47 0.00 0.00 34101.99 9472.93 22401.40 00:13:24.524 0 00:13:24.524 16:17:07 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:13:24.524 16:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.524 16:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.524 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.524 16:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:13:24.524 "subsystems": [ 00:13:24.524 { 00:13:24.524 "subsystem": "keyring", 00:13:24.524 "config": [ 00:13:24.524 { 00:13:24.524 "method": "keyring_file_add_key", 00:13:24.524 "params": { 00:13:24.524 "name": "key0", 00:13:24.524 "path": "/tmp/tmp.RMDll0bN69" 00:13:24.524 } 00:13:24.524 } 00:13:24.524 ] 00:13:24.524 }, 00:13:24.524 { 00:13:24.524 "subsystem": "iobuf", 00:13:24.524 "config": [ 00:13:24.525 { 00:13:24.525 "method": "iobuf_set_options", 00:13:24.525 "params": { 00:13:24.525 "small_pool_count": 8192, 00:13:24.525 "large_pool_count": 1024, 00:13:24.525 "small_bufsize": 8192, 00:13:24.525 "large_bufsize": 135168 00:13:24.525 } 00:13:24.525 } 00:13:24.525 ] 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "subsystem": "sock", 00:13:24.525 "config": [ 00:13:24.525 { 00:13:24.525 "method": "sock_set_default_impl", 00:13:24.525 "params": { 00:13:24.525 "impl_name": "uring" 
00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "sock_impl_set_options", 00:13:24.525 "params": { 00:13:24.525 "impl_name": "ssl", 00:13:24.525 "recv_buf_size": 4096, 00:13:24.525 "send_buf_size": 4096, 00:13:24.525 "enable_recv_pipe": true, 00:13:24.525 "enable_quickack": false, 00:13:24.525 "enable_placement_id": 0, 00:13:24.525 "enable_zerocopy_send_server": true, 00:13:24.525 "enable_zerocopy_send_client": false, 00:13:24.525 "zerocopy_threshold": 0, 00:13:24.525 "tls_version": 0, 00:13:24.525 "enable_ktls": false 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "sock_impl_set_options", 00:13:24.525 "params": { 00:13:24.525 "impl_name": "posix", 00:13:24.525 "recv_buf_size": 2097152, 00:13:24.525 "send_buf_size": 2097152, 00:13:24.525 "enable_recv_pipe": true, 00:13:24.525 "enable_quickack": false, 00:13:24.525 "enable_placement_id": 0, 00:13:24.525 "enable_zerocopy_send_server": true, 00:13:24.525 "enable_zerocopy_send_client": false, 00:13:24.525 "zerocopy_threshold": 0, 00:13:24.525 "tls_version": 0, 00:13:24.525 "enable_ktls": false 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "sock_impl_set_options", 00:13:24.525 "params": { 00:13:24.525 "impl_name": "uring", 00:13:24.525 "recv_buf_size": 2097152, 00:13:24.525 "send_buf_size": 2097152, 00:13:24.525 "enable_recv_pipe": true, 00:13:24.525 "enable_quickack": false, 00:13:24.525 "enable_placement_id": 0, 00:13:24.525 "enable_zerocopy_send_server": false, 00:13:24.525 "enable_zerocopy_send_client": false, 00:13:24.525 "zerocopy_threshold": 0, 00:13:24.525 "tls_version": 0, 00:13:24.525 "enable_ktls": false 00:13:24.525 } 00:13:24.525 } 00:13:24.525 ] 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "subsystem": "vmd", 00:13:24.525 "config": [] 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "subsystem": "accel", 00:13:24.525 "config": [ 00:13:24.525 { 00:13:24.525 "method": "accel_set_options", 00:13:24.525 "params": { 00:13:24.525 "small_cache_size": 128, 00:13:24.525 "large_cache_size": 16, 00:13:24.525 "task_count": 2048, 00:13:24.525 "sequence_count": 2048, 00:13:24.525 "buf_count": 2048 00:13:24.525 } 00:13:24.525 } 00:13:24.525 ] 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "subsystem": "bdev", 00:13:24.525 "config": [ 00:13:24.525 { 00:13:24.525 "method": "bdev_set_options", 00:13:24.525 "params": { 00:13:24.525 "bdev_io_pool_size": 65535, 00:13:24.525 "bdev_io_cache_size": 256, 00:13:24.525 "bdev_auto_examine": true, 00:13:24.525 "iobuf_small_cache_size": 128, 00:13:24.525 "iobuf_large_cache_size": 16 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "bdev_raid_set_options", 00:13:24.525 "params": { 00:13:24.525 "process_window_size_kb": 1024 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "bdev_iscsi_set_options", 00:13:24.525 "params": { 00:13:24.525 "timeout_sec": 30 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "bdev_nvme_set_options", 00:13:24.525 "params": { 00:13:24.525 "action_on_timeout": "none", 00:13:24.525 "timeout_us": 0, 00:13:24.525 "timeout_admin_us": 0, 00:13:24.525 "keep_alive_timeout_ms": 10000, 00:13:24.525 "arbitration_burst": 0, 00:13:24.525 "low_priority_weight": 0, 00:13:24.525 "medium_priority_weight": 0, 00:13:24.525 "high_priority_weight": 0, 00:13:24.525 "nvme_adminq_poll_period_us": 10000, 00:13:24.525 "nvme_ioq_poll_period_us": 0, 00:13:24.525 "io_queue_requests": 0, 00:13:24.525 "delay_cmd_submit": true, 00:13:24.525 "transport_retry_count": 4, 00:13:24.525 "bdev_retry_count": 3, 
00:13:24.525 "transport_ack_timeout": 0, 00:13:24.525 "ctrlr_loss_timeout_sec": 0, 00:13:24.525 "reconnect_delay_sec": 0, 00:13:24.525 "fast_io_fail_timeout_sec": 0, 00:13:24.525 "disable_auto_failback": false, 00:13:24.525 "generate_uuids": false, 00:13:24.525 "transport_tos": 0, 00:13:24.525 "nvme_error_stat": false, 00:13:24.525 "rdma_srq_size": 0, 00:13:24.525 "io_path_stat": false, 00:13:24.525 "allow_accel_sequence": false, 00:13:24.525 "rdma_max_cq_size": 0, 00:13:24.525 "rdma_cm_event_timeout_ms": 0, 00:13:24.525 "dhchap_digests": [ 00:13:24.525 "sha256", 00:13:24.525 "sha384", 00:13:24.525 "sha512" 00:13:24.525 ], 00:13:24.525 "dhchap_dhgroups": [ 00:13:24.525 "null", 00:13:24.525 "ffdhe2048", 00:13:24.525 "ffdhe3072", 00:13:24.525 "ffdhe4096", 00:13:24.525 "ffdhe6144", 00:13:24.525 "ffdhe8192" 00:13:24.525 ] 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "bdev_nvme_set_hotplug", 00:13:24.525 "params": { 00:13:24.525 "period_us": 100000, 00:13:24.525 "enable": false 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "bdev_malloc_create", 00:13:24.525 "params": { 00:13:24.525 "name": "malloc0", 00:13:24.525 "num_blocks": 8192, 00:13:24.525 "block_size": 4096, 00:13:24.525 "physical_block_size": 4096, 00:13:24.525 "uuid": "b2a0eb1c-6d59-42ac-9734-1142dde8c9e8", 00:13:24.525 "optimal_io_boundary": 0 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "bdev_wait_for_examine" 00:13:24.525 } 00:13:24.525 ] 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "subsystem": "nbd", 00:13:24.525 "config": [] 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "subsystem": "scheduler", 00:13:24.525 "config": [ 00:13:24.525 { 00:13:24.525 "method": "framework_set_scheduler", 00:13:24.525 "params": { 00:13:24.525 "name": "static" 00:13:24.525 } 00:13:24.525 } 00:13:24.525 ] 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "subsystem": "nvmf", 00:13:24.525 "config": [ 00:13:24.525 { 00:13:24.525 "method": "nvmf_set_config", 00:13:24.525 "params": { 00:13:24.525 "discovery_filter": "match_any", 00:13:24.525 "admin_cmd_passthru": { 00:13:24.525 "identify_ctrlr": false 00:13:24.525 } 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "nvmf_set_max_subsystems", 00:13:24.525 "params": { 00:13:24.525 "max_subsystems": 1024 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "nvmf_set_crdt", 00:13:24.525 "params": { 00:13:24.525 "crdt1": 0, 00:13:24.525 "crdt2": 0, 00:13:24.525 "crdt3": 0 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "nvmf_create_transport", 00:13:24.525 "params": { 00:13:24.525 "trtype": "TCP", 00:13:24.525 "max_queue_depth": 128, 00:13:24.525 "max_io_qpairs_per_ctrlr": 127, 00:13:24.525 "in_capsule_data_size": 4096, 00:13:24.525 "max_io_size": 131072, 00:13:24.525 "io_unit_size": 131072, 00:13:24.525 "max_aq_depth": 128, 00:13:24.525 "num_shared_buffers": 511, 00:13:24.525 "buf_cache_size": 4294967295, 00:13:24.525 "dif_insert_or_strip": false, 00:13:24.525 "zcopy": false, 00:13:24.525 "c2h_success": false, 00:13:24.525 "sock_priority": 0, 00:13:24.525 "abort_timeout_sec": 1, 00:13:24.525 "ack_timeout": 0, 00:13:24.525 "data_wr_pool_size": 0 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "nvmf_create_subsystem", 00:13:24.525 "params": { 00:13:24.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.525 "allow_any_host": false, 00:13:24.525 "serial_number": "00000000000000000000", 00:13:24.525 "model_number": "SPDK bdev Controller", 00:13:24.525 "max_namespaces": 32, 
00:13:24.525 "min_cntlid": 1, 00:13:24.525 "max_cntlid": 65519, 00:13:24.525 "ana_reporting": false 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "nvmf_subsystem_add_host", 00:13:24.525 "params": { 00:13:24.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.525 "host": "nqn.2016-06.io.spdk:host1", 00:13:24.525 "psk": "key0" 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "nvmf_subsystem_add_ns", 00:13:24.525 "params": { 00:13:24.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.525 "namespace": { 00:13:24.525 "nsid": 1, 00:13:24.525 "bdev_name": "malloc0", 00:13:24.525 "nguid": "B2A0EB1C6D5942AC97341142DDE8C9E8", 00:13:24.525 "uuid": "b2a0eb1c-6d59-42ac-9734-1142dde8c9e8", 00:13:24.525 "no_auto_visible": false 00:13:24.525 } 00:13:24.525 } 00:13:24.525 }, 00:13:24.525 { 00:13:24.525 "method": "nvmf_subsystem_add_listener", 00:13:24.525 "params": { 00:13:24.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.526 "listen_address": { 00:13:24.526 "trtype": "TCP", 00:13:24.526 "adrfam": "IPv4", 00:13:24.526 "traddr": "10.0.0.2", 00:13:24.526 "trsvcid": "4420" 00:13:24.526 }, 00:13:24.526 "secure_channel": true 00:13:24.526 } 00:13:24.526 } 00:13:24.526 ] 00:13:24.526 } 00:13:24.526 ] 00:13:24.526 }' 00:13:24.526 16:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:24.787 16:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:13:24.787 "subsystems": [ 00:13:24.787 { 00:13:24.787 "subsystem": "keyring", 00:13:24.787 "config": [ 00:13:24.787 { 00:13:24.787 "method": "keyring_file_add_key", 00:13:24.787 "params": { 00:13:24.787 "name": "key0", 00:13:24.787 "path": "/tmp/tmp.RMDll0bN69" 00:13:24.787 } 00:13:24.787 } 00:13:24.787 ] 00:13:24.787 }, 00:13:24.787 { 00:13:24.787 "subsystem": "iobuf", 00:13:24.787 "config": [ 00:13:24.787 { 00:13:24.787 "method": "iobuf_set_options", 00:13:24.787 "params": { 00:13:24.787 "small_pool_count": 8192, 00:13:24.787 "large_pool_count": 1024, 00:13:24.787 "small_bufsize": 8192, 00:13:24.787 "large_bufsize": 135168 00:13:24.787 } 00:13:24.787 } 00:13:24.787 ] 00:13:24.787 }, 00:13:24.787 { 00:13:24.787 "subsystem": "sock", 00:13:24.787 "config": [ 00:13:24.787 { 00:13:24.787 "method": "sock_set_default_impl", 00:13:24.787 "params": { 00:13:24.787 "impl_name": "uring" 00:13:24.787 } 00:13:24.787 }, 00:13:24.787 { 00:13:24.787 "method": "sock_impl_set_options", 00:13:24.787 "params": { 00:13:24.787 "impl_name": "ssl", 00:13:24.787 "recv_buf_size": 4096, 00:13:24.787 "send_buf_size": 4096, 00:13:24.787 "enable_recv_pipe": true, 00:13:24.787 "enable_quickack": false, 00:13:24.787 "enable_placement_id": 0, 00:13:24.787 "enable_zerocopy_send_server": true, 00:13:24.787 "enable_zerocopy_send_client": false, 00:13:24.787 "zerocopy_threshold": 0, 00:13:24.787 "tls_version": 0, 00:13:24.787 "enable_ktls": false 00:13:24.787 } 00:13:24.787 }, 00:13:24.787 { 00:13:24.787 "method": "sock_impl_set_options", 00:13:24.787 "params": { 00:13:24.787 "impl_name": "posix", 00:13:24.787 "recv_buf_size": 2097152, 00:13:24.787 "send_buf_size": 2097152, 00:13:24.787 "enable_recv_pipe": true, 00:13:24.787 "enable_quickack": false, 00:13:24.787 "enable_placement_id": 0, 00:13:24.787 "enable_zerocopy_send_server": true, 00:13:24.787 "enable_zerocopy_send_client": false, 00:13:24.787 "zerocopy_threshold": 0, 00:13:24.787 "tls_version": 0, 00:13:24.787 "enable_ktls": false 00:13:24.787 } 00:13:24.787 }, 00:13:24.787 { 00:13:24.787 "method": 
"sock_impl_set_options", 00:13:24.787 "params": { 00:13:24.787 "impl_name": "uring", 00:13:24.787 "recv_buf_size": 2097152, 00:13:24.787 "send_buf_size": 2097152, 00:13:24.787 "enable_recv_pipe": true, 00:13:24.787 "enable_quickack": false, 00:13:24.787 "enable_placement_id": 0, 00:13:24.787 "enable_zerocopy_send_server": false, 00:13:24.787 "enable_zerocopy_send_client": false, 00:13:24.787 "zerocopy_threshold": 0, 00:13:24.787 "tls_version": 0, 00:13:24.787 "enable_ktls": false 00:13:24.787 } 00:13:24.787 } 00:13:24.787 ] 00:13:24.787 }, 00:13:24.787 { 00:13:24.787 "subsystem": "vmd", 00:13:24.787 "config": [] 00:13:24.787 }, 00:13:24.787 { 00:13:24.787 "subsystem": "accel", 00:13:24.787 "config": [ 00:13:24.787 { 00:13:24.787 "method": "accel_set_options", 00:13:24.787 "params": { 00:13:24.787 "small_cache_size": 128, 00:13:24.787 "large_cache_size": 16, 00:13:24.787 "task_count": 2048, 00:13:24.787 "sequence_count": 2048, 00:13:24.787 "buf_count": 2048 00:13:24.787 } 00:13:24.787 } 00:13:24.788 ] 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "subsystem": "bdev", 00:13:24.788 "config": [ 00:13:24.788 { 00:13:24.788 "method": "bdev_set_options", 00:13:24.788 "params": { 00:13:24.788 "bdev_io_pool_size": 65535, 00:13:24.788 "bdev_io_cache_size": 256, 00:13:24.788 "bdev_auto_examine": true, 00:13:24.788 "iobuf_small_cache_size": 128, 00:13:24.788 "iobuf_large_cache_size": 16 00:13:24.788 } 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "method": "bdev_raid_set_options", 00:13:24.788 "params": { 00:13:24.788 "process_window_size_kb": 1024 00:13:24.788 } 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "method": "bdev_iscsi_set_options", 00:13:24.788 "params": { 00:13:24.788 "timeout_sec": 30 00:13:24.788 } 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "method": "bdev_nvme_set_options", 00:13:24.788 "params": { 00:13:24.788 "action_on_timeout": "none", 00:13:24.788 "timeout_us": 0, 00:13:24.788 "timeout_admin_us": 0, 00:13:24.788 "keep_alive_timeout_ms": 10000, 00:13:24.788 "arbitration_burst": 0, 00:13:24.788 "low_priority_weight": 0, 00:13:24.788 "medium_priority_weight": 0, 00:13:24.788 "high_priority_weight": 0, 00:13:24.788 "nvme_adminq_poll_period_us": 10000, 00:13:24.788 "nvme_ioq_poll_period_us": 0, 00:13:24.788 "io_queue_requests": 512, 00:13:24.788 "delay_cmd_submit": true, 00:13:24.788 "transport_retry_count": 4, 00:13:24.788 "bdev_retry_count": 3, 00:13:24.788 "transport_ack_timeout": 0, 00:13:24.788 "ctrlr_loss_timeout_sec": 0, 00:13:24.788 "reconnect_delay_sec": 0, 00:13:24.788 "fast_io_fail_timeout_sec": 0, 00:13:24.788 "disable_auto_failback": false, 00:13:24.788 "generate_uuids": false, 00:13:24.788 "transport_tos": 0, 00:13:24.788 "nvme_error_stat": false, 00:13:24.788 "rdma_srq_size": 0, 00:13:24.788 "io_path_stat": false, 00:13:24.788 "allow_accel_sequence": false, 00:13:24.788 "rdma_max_cq_size": 0, 00:13:24.788 "rdma_cm_event_timeout_ms": 0, 00:13:24.788 "dhchap_digests": [ 00:13:24.788 "sha256", 00:13:24.788 "sha384", 00:13:24.788 "sha512" 00:13:24.788 ], 00:13:24.788 "dhchap_dhgroups": [ 00:13:24.788 "null", 00:13:24.788 "ffdhe2048", 00:13:24.788 "ffdhe3072", 00:13:24.788 "ffdhe4096", 00:13:24.788 "ffdhe6144", 00:13:24.788 "ffdhe8192" 00:13:24.788 ] 00:13:24.788 } 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "method": "bdev_nvme_attach_controller", 00:13:24.788 "params": { 00:13:24.788 "name": "nvme0", 00:13:24.788 "trtype": "TCP", 00:13:24.788 "adrfam": "IPv4", 00:13:24.788 "traddr": "10.0.0.2", 00:13:24.788 "trsvcid": "4420", 00:13:24.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:13:24.788 "prchk_reftag": false, 00:13:24.788 "prchk_guard": false, 00:13:24.788 "ctrlr_loss_timeout_sec": 0, 00:13:24.788 "reconnect_delay_sec": 0, 00:13:24.788 "fast_io_fail_timeout_sec": 0, 00:13:24.788 "psk": "key0", 00:13:24.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.788 "hdgst": false, 00:13:24.788 "ddgst": false 00:13:24.788 } 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "method": "bdev_nvme_set_hotplug", 00:13:24.788 "params": { 00:13:24.788 "period_us": 100000, 00:13:24.788 "enable": false 00:13:24.788 } 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "method": "bdev_enable_histogram", 00:13:24.788 "params": { 00:13:24.788 "name": "nvme0n1", 00:13:24.788 "enable": true 00:13:24.788 } 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "method": "bdev_wait_for_examine" 00:13:24.788 } 00:13:24.788 ] 00:13:24.788 }, 00:13:24.788 { 00:13:24.788 "subsystem": "nbd", 00:13:24.789 "config": [] 00:13:24.789 } 00:13:24.789 ] 00:13:24.789 }' 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 73505 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73505 ']' 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73505 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73505 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:24.789 killing process with pid 73505 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73505' 00:13:24.789 Received shutdown signal, test time was about 1.000000 seconds 00:13:24.789 00:13:24.789 Latency(us) 00:13:24.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.789 =================================================================================================================== 00:13:24.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73505 00:13:24.789 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73505 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 73473 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73473 ']' 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73473 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73473 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:25.050 killing process with pid 73473 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73473' 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73473 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73473 
00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.050 16:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:13:25.050 "subsystems": [ 00:13:25.050 { 00:13:25.050 "subsystem": "keyring", 00:13:25.050 "config": [ 00:13:25.050 { 00:13:25.050 "method": "keyring_file_add_key", 00:13:25.050 "params": { 00:13:25.050 "name": "key0", 00:13:25.050 "path": "/tmp/tmp.RMDll0bN69" 00:13:25.050 } 00:13:25.050 } 00:13:25.050 ] 00:13:25.050 }, 00:13:25.050 { 00:13:25.050 "subsystem": "iobuf", 00:13:25.050 "config": [ 00:13:25.050 { 00:13:25.050 "method": "iobuf_set_options", 00:13:25.050 "params": { 00:13:25.050 "small_pool_count": 8192, 00:13:25.050 "large_pool_count": 1024, 00:13:25.050 "small_bufsize": 8192, 00:13:25.050 "large_bufsize": 135168 00:13:25.050 } 00:13:25.050 } 00:13:25.050 ] 00:13:25.050 }, 00:13:25.050 { 00:13:25.050 "subsystem": "sock", 00:13:25.050 "config": [ 00:13:25.050 { 00:13:25.050 "method": "sock_set_default_impl", 00:13:25.050 "params": { 00:13:25.050 "impl_name": "uring" 00:13:25.050 } 00:13:25.050 }, 00:13:25.050 { 00:13:25.050 "method": "sock_impl_set_options", 00:13:25.050 "params": { 00:13:25.050 "impl_name": "ssl", 00:13:25.050 "recv_buf_size": 4096, 00:13:25.050 "send_buf_size": 4096, 00:13:25.051 "enable_recv_pipe": true, 00:13:25.051 "enable_quickack": false, 00:13:25.051 "enable_placement_id": 0, 00:13:25.051 "enable_zerocopy_send_server": true, 00:13:25.051 "enable_zerocopy_send_client": false, 00:13:25.051 "zerocopy_threshold": 0, 00:13:25.051 "tls_version": 0, 00:13:25.051 "enable_ktls": false 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "sock_impl_set_options", 00:13:25.051 "params": { 00:13:25.051 "impl_name": "posix", 00:13:25.051 "recv_buf_size": 2097152, 00:13:25.051 "send_buf_size": 2097152, 00:13:25.051 "enable_recv_pipe": true, 00:13:25.051 "enable_quickack": false, 00:13:25.051 "enable_placement_id": 0, 00:13:25.051 "enable_zerocopy_send_server": true, 00:13:25.051 "enable_zerocopy_send_client": false, 00:13:25.051 "zerocopy_threshold": 0, 00:13:25.051 "tls_version": 0, 00:13:25.051 "enable_ktls": false 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "sock_impl_set_options", 00:13:25.051 "params": { 00:13:25.051 "impl_name": "uring", 00:13:25.051 "recv_buf_size": 2097152, 00:13:25.051 "send_buf_size": 2097152, 00:13:25.051 "enable_recv_pipe": true, 00:13:25.051 "enable_quickack": false, 00:13:25.051 "enable_placement_id": 0, 00:13:25.051 "enable_zerocopy_send_server": false, 00:13:25.051 "enable_zerocopy_send_client": false, 00:13:25.051 "zerocopy_threshold": 0, 00:13:25.051 "tls_version": 0, 00:13:25.051 "enable_ktls": false 00:13:25.051 } 00:13:25.051 } 00:13:25.051 ] 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "subsystem": "vmd", 00:13:25.051 "config": [] 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "subsystem": "accel", 00:13:25.051 "config": [ 00:13:25.051 { 00:13:25.051 "method": "accel_set_options", 00:13:25.051 "params": { 00:13:25.051 "small_cache_size": 128, 00:13:25.051 "large_cache_size": 16, 00:13:25.051 "task_count": 2048, 00:13:25.051 "sequence_count": 2048, 00:13:25.051 "buf_count": 2048 00:13:25.051 } 00:13:25.051 } 00:13:25.051 ] 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "subsystem": "bdev", 00:13:25.051 "config": [ 00:13:25.051 { 00:13:25.051 
"method": "bdev_set_options", 00:13:25.051 "params": { 00:13:25.051 "bdev_io_pool_size": 65535, 00:13:25.051 "bdev_io_cache_size": 256, 00:13:25.051 "bdev_auto_examine": true, 00:13:25.051 "iobuf_small_cache_size": 128, 00:13:25.051 "iobuf_large_cache_size": 16 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "bdev_raid_set_options", 00:13:25.051 "params": { 00:13:25.051 "process_window_size_kb": 1024 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "bdev_iscsi_set_options", 00:13:25.051 "params": { 00:13:25.051 "timeout_sec": 30 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "bdev_nvme_set_options", 00:13:25.051 "params": { 00:13:25.051 "action_on_timeout": "none", 00:13:25.051 "timeout_us": 0, 00:13:25.051 "timeout_admin_us": 0, 00:13:25.051 "keep_alive_timeout_ms": 10000, 00:13:25.051 "arbitration_burst": 0, 00:13:25.051 "low_priority_weight": 0, 00:13:25.051 "medium_priority_weight": 0, 00:13:25.051 "high_priority_weight": 0, 00:13:25.051 "nvme_adminq_poll_period_us": 10000, 00:13:25.051 "nvme_ioq_poll_period_us": 0, 00:13:25.051 "io_queue_requests": 0, 00:13:25.051 "delay_cmd_submit": true, 00:13:25.051 "transport_retry_count": 4, 00:13:25.051 "bdev_retry_count": 3, 00:13:25.051 "transport_ack_timeout": 0, 00:13:25.051 "ctrlr_loss_timeout_sec": 0, 00:13:25.051 "reconnect_delay_sec": 0, 00:13:25.051 "fast_io_fail_timeout_sec": 0, 00:13:25.051 "disable_auto_failback": false, 00:13:25.051 "generate_uuids": false, 00:13:25.051 "transport_tos": 0, 00:13:25.051 "nvme_error_stat": false, 00:13:25.051 "rdma_srq_size": 0, 00:13:25.051 "io_path_stat": false, 00:13:25.051 "allow_accel_sequence": false, 00:13:25.051 "rdma_max_cq_size": 0, 00:13:25.051 "rdma_cm_event_timeout_ms": 0, 00:13:25.051 "dhchap_digests": [ 00:13:25.051 "sha256", 00:13:25.051 "sha384", 00:13:25.051 "sha512" 00:13:25.051 ], 00:13:25.051 "dhchap_dhgroups": [ 00:13:25.051 "null", 00:13:25.051 "ffdhe2048", 00:13:25.051 "ffdhe3072", 00:13:25.051 "ffdhe4096", 00:13:25.051 "ffdhe6144", 00:13:25.051 "ffdhe8192" 00:13:25.051 ] 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "bdev_nvme_set_hotplug", 00:13:25.051 "params": { 00:13:25.051 "period_us": 100000, 00:13:25.051 "enable": false 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "bdev_malloc_create", 00:13:25.051 "params": { 00:13:25.051 "name": "malloc0", 00:13:25.051 "num_blocks": 8192, 00:13:25.051 "block_size": 4096, 00:13:25.051 "physical_block_size": 4096, 00:13:25.051 "uuid": "b2a0eb1c-6d59-42ac-9734-1142dde8c9e8", 00:13:25.051 "optimal_io_boundary": 0 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "bdev_wait_for_examine" 00:13:25.051 } 00:13:25.051 ] 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "subsystem": "nbd", 00:13:25.051 "config": [] 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "subsystem": "scheduler", 00:13:25.051 "config": [ 00:13:25.051 { 00:13:25.051 "method": "framework_set_scheduler", 00:13:25.051 "params": { 00:13:25.051 "name": "static" 00:13:25.051 } 00:13:25.051 } 00:13:25.051 ] 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "subsystem": "nvmf", 00:13:25.051 "config": [ 00:13:25.051 { 00:13:25.051 "method": "nvmf_set_config", 00:13:25.051 "params": { 00:13:25.051 "discovery_filter": "match_any", 00:13:25.051 "admin_cmd_passthru": { 00:13:25.051 "identify_ctrlr": false 00:13:25.051 } 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "nvmf_set_max_subsystems", 00:13:25.051 "params": { 00:13:25.051 "max_subsystems": 
1024 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "nvmf_set_crdt", 00:13:25.051 "params": { 00:13:25.051 "crdt1": 0, 00:13:25.051 "crdt2": 0, 00:13:25.051 "crdt3": 0 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "nvmf_create_transport", 00:13:25.051 "params": { 00:13:25.051 "trtype": "TCP", 00:13:25.051 "max_queue_depth": 128, 00:13:25.051 "max_io_qpairs_per_ctrlr": 127, 00:13:25.051 "in_capsule_data_size": 4096, 00:13:25.051 "max_io_size": 131072, 00:13:25.051 "io_unit_size": 131072, 00:13:25.051 "max_aq_depth": 128, 00:13:25.051 "num_shared_buffers": 511, 00:13:25.051 "buf_cache_size": 4294967295, 00:13:25.051 "dif_insert_or_strip": false, 00:13:25.051 "zcopy": false, 00:13:25.051 "c2h_success": false, 00:13:25.051 "sock_priority": 0, 00:13:25.051 "abort_timeout_sec": 1, 00:13:25.051 "ack_timeout": 0, 00:13:25.051 "data_wr_pool_size": 0 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "nvmf_create_subsystem", 00:13:25.051 "params": { 00:13:25.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.051 "allow_any_host": false, 00:13:25.051 "serial_number": "00000000000000000000", 00:13:25.051 "model_number": "SPDK bdev Controller", 00:13:25.051 "max_namespaces": 32, 00:13:25.051 "min_cntlid": 1, 00:13:25.051 "max_cntlid": 65519, 00:13:25.051 "ana_reporting": false 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "nvmf_subsystem_add_host", 00:13:25.051 "params": { 00:13:25.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.051 "host": "nqn.2016-06.io.spdk:host1", 00:13:25.051 "psk": "key0" 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "nvmf_subsystem_add_ns", 00:13:25.051 "params": { 00:13:25.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.051 "namespace": { 00:13:25.051 "nsid": 1, 00:13:25.051 "bdev_name": "malloc0", 00:13:25.051 "nguid": "B2A0EB1C6D5942AC97341142DDE8C9E8", 00:13:25.051 "uuid": "b2a0eb1c-6d59-42ac-9734-1142dde8c9e8", 00:13:25.051 "no_auto_visible": false 00:13:25.051 } 00:13:25.051 } 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "method": "nvmf_subsystem_add_listener", 00:13:25.051 "params": { 00:13:25.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.051 "listen_address": { 00:13:25.051 "trtype": "TCP", 00:13:25.051 "adrfam": "IPv4", 00:13:25.051 "traddr": "10.0.0.2", 00:13:25.051 "trsvcid": "4420" 00:13:25.051 }, 00:13:25.051 "secure_channel": true 00:13:25.051 } 00:13:25.051 } 00:13:25.051 ] 00:13:25.051 } 00:13:25.051 ] 00:13:25.051 }' 00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73553 00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73553 00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73553 ']' 00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:25.051 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.052 16:17:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.052 [2024-07-12 16:17:08.771757] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:25.052 [2024-07-12 16:17:08.771849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.308 [2024-07-12 16:17:08.907370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.308 [2024-07-12 16:17:08.965089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.308 [2024-07-12 16:17:08.965140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.309 [2024-07-12 16:17:08.965152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.309 [2024-07-12 16:17:08.965161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.309 [2024-07-12 16:17:08.965168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.309 [2024-07-12 16:17:08.965246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.566 [2024-07-12 16:17:09.111998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.567 [2024-07-12 16:17:09.171453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.567 [2024-07-12 16:17:09.203343] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:25.567 [2024-07-12 16:17:09.203650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=73585 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 73585 /var/tmp/bdevperf.sock 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73585 ']' 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
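The restart phase traced here does not re-issue those RPCs: the new target (pid 73553) is started with -c /dev/fd/62 and the new bdevperf (pid 73585) with -c /dev/fd/63, so each process reads the previously saved JSON configuration from a file descriptor fed by the echoed string that follows. A minimal bash sketch of that replay pattern; the binaries and flags are taken from the trace, while the process substitution and backgrounding are an assumed simplification of what the harness does (in the trace the target is additionally wrapped in ip netns exec nvmf_tgt_ns_spdk):

    # Capture the live configs once, while the original processes are still up...
    tgtcfg=$(scripts/rpc.py save_config)
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # ...then replay them at start-up instead of re-running the individual RPCs
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &        # shows up as -c /dev/fd/62 in the trace
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &        # shows up as -c /dev/fd/63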
00:13:26.133 16:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:13:26.133 "subsystems": [ 00:13:26.133 { 00:13:26.133 "subsystem": "keyring", 00:13:26.133 "config": [ 00:13:26.133 { 00:13:26.133 "method": "keyring_file_add_key", 00:13:26.133 "params": { 00:13:26.133 "name": "key0", 00:13:26.133 "path": "/tmp/tmp.RMDll0bN69" 00:13:26.133 } 00:13:26.133 } 00:13:26.133 ] 00:13:26.133 }, 00:13:26.133 { 00:13:26.133 "subsystem": "iobuf", 00:13:26.133 "config": [ 00:13:26.133 { 00:13:26.133 "method": "iobuf_set_options", 00:13:26.133 "params": { 00:13:26.133 "small_pool_count": 8192, 00:13:26.133 "large_pool_count": 1024, 00:13:26.133 "small_bufsize": 8192, 00:13:26.133 "large_bufsize": 135168 00:13:26.133 } 00:13:26.133 } 00:13:26.133 ] 00:13:26.133 }, 00:13:26.133 { 00:13:26.133 "subsystem": "sock", 00:13:26.133 "config": [ 00:13:26.133 { 00:13:26.133 "method": "sock_set_default_impl", 00:13:26.133 "params": { 00:13:26.133 "impl_name": "uring" 00:13:26.133 } 00:13:26.133 }, 00:13:26.133 { 00:13:26.133 "method": "sock_impl_set_options", 00:13:26.133 "params": { 00:13:26.133 "impl_name": "ssl", 00:13:26.133 "recv_buf_size": 4096, 00:13:26.133 "send_buf_size": 4096, 00:13:26.133 "enable_recv_pipe": true, 00:13:26.133 "enable_quickack": false, 00:13:26.133 "enable_placement_id": 0, 00:13:26.133 "enable_zerocopy_send_server": true, 00:13:26.133 "enable_zerocopy_send_client": false, 00:13:26.133 "zerocopy_threshold": 0, 00:13:26.133 "tls_version": 0, 00:13:26.133 "enable_ktls": false 00:13:26.133 } 00:13:26.133 }, 00:13:26.133 { 00:13:26.133 "method": "sock_impl_set_options", 00:13:26.133 "params": { 00:13:26.133 "impl_name": "posix", 00:13:26.134 "recv_buf_size": 2097152, 00:13:26.134 "send_buf_size": 2097152, 00:13:26.134 "enable_recv_pipe": true, 00:13:26.134 "enable_quickack": false, 00:13:26.134 "enable_placement_id": 0, 00:13:26.134 "enable_zerocopy_send_server": true, 00:13:26.134 "enable_zerocopy_send_client": false, 00:13:26.134 "zerocopy_threshold": 0, 00:13:26.134 "tls_version": 0, 00:13:26.134 "enable_ktls": false 00:13:26.134 } 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "method": "sock_impl_set_options", 00:13:26.134 "params": { 00:13:26.134 "impl_name": "uring", 00:13:26.134 "recv_buf_size": 2097152, 00:13:26.134 "send_buf_size": 2097152, 00:13:26.134 "enable_recv_pipe": true, 00:13:26.134 "enable_quickack": false, 00:13:26.134 "enable_placement_id": 0, 00:13:26.134 "enable_zerocopy_send_server": false, 00:13:26.134 "enable_zerocopy_send_client": false, 00:13:26.134 "zerocopy_threshold": 0, 00:13:26.134 "tls_version": 0, 00:13:26.134 "enable_ktls": false 00:13:26.134 } 00:13:26.134 } 00:13:26.134 ] 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "subsystem": "vmd", 00:13:26.134 "config": [] 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "subsystem": "accel", 00:13:26.134 "config": [ 00:13:26.134 { 00:13:26.134 "method": "accel_set_options", 00:13:26.134 "params": { 00:13:26.134 "small_cache_size": 128, 00:13:26.134 "large_cache_size": 16, 00:13:26.134 "task_count": 2048, 00:13:26.134 "sequence_count": 2048, 00:13:26.134 "buf_count": 2048 00:13:26.134 } 00:13:26.134 } 00:13:26.134 ] 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "subsystem": "bdev", 00:13:26.134 "config": [ 00:13:26.134 { 00:13:26.134 "method": "bdev_set_options", 00:13:26.134 "params": { 00:13:26.134 "bdev_io_pool_size": 65535, 00:13:26.134 "bdev_io_cache_size": 256, 00:13:26.134 "bdev_auto_examine": true, 00:13:26.134 "iobuf_small_cache_size": 128, 00:13:26.134 "iobuf_large_cache_size": 16 00:13:26.134 } 
00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "method": "bdev_raid_set_options", 00:13:26.134 "params": { 00:13:26.134 "process_window_size_kb": 1024 00:13:26.134 } 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "method": "bdev_iscsi_set_options", 00:13:26.134 "params": { 00:13:26.134 "timeout_sec": 30 00:13:26.134 } 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "method": "bdev_nvme_set_options", 00:13:26.134 "params": { 00:13:26.134 "action_on_timeout": "none", 00:13:26.134 "timeout_us": 0, 00:13:26.134 "timeout_admin_us": 0, 00:13:26.134 "keep_alive_timeout_ms": 10000, 00:13:26.134 "arbitration_burst": 0, 00:13:26.134 "low_priority_weight": 0, 00:13:26.134 "medium_priority_weight": 0, 00:13:26.134 "high_priority_weight": 0, 00:13:26.134 "nvme_adminq_poll_period_us": 10000, 00:13:26.134 "nvme_ioq_poll_period_us": 0, 00:13:26.134 "io_queue_requests": 512, 00:13:26.134 "delay_cmd_submit": true, 00:13:26.134 "transport_retry_count": 4, 00:13:26.134 "bdev_retry_count": 3, 00:13:26.134 "transport_ack_timeout": 0, 00:13:26.134 "ctrlr_loss_timeout_se 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.134 c": 0, 00:13:26.134 "reconnect_delay_sec": 0, 00:13:26.134 "fast_io_fail_timeout_sec": 0, 00:13:26.134 "disable_auto_failback": false, 00:13:26.134 "generate_uuids": false, 00:13:26.134 "transport_tos": 0, 00:13:26.134 "nvme_error_stat": false, 00:13:26.134 "rdma_srq_size": 0, 00:13:26.134 "io_path_stat": false, 00:13:26.134 "allow_accel_sequence": false, 00:13:26.134 "rdma_max_cq_size": 0, 00:13:26.134 "rdma_cm_event_timeout_ms": 0, 00:13:26.134 "dhchap_digests": [ 00:13:26.134 "sha256", 00:13:26.134 "sha384", 00:13:26.134 "sha512" 00:13:26.134 ], 00:13:26.134 "dhchap_dhgroups": [ 00:13:26.134 "null", 00:13:26.134 "ffdhe2048", 00:13:26.134 "ffdhe3072", 00:13:26.134 "ffdhe4096", 00:13:26.134 "ffdhe6144", 00:13:26.134 "ffdhe8192" 00:13:26.134 ] 00:13:26.134 } 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "method": "bdev_nvme_attach_controller", 00:13:26.134 "params": { 00:13:26.134 "name": "nvme0", 00:13:26.134 "trtype": "TCP", 00:13:26.134 "adrfam": "IPv4", 00:13:26.134 "traddr": "10.0.0.2", 00:13:26.134 "trsvcid": "4420", 00:13:26.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.134 "prchk_reftag": false, 00:13:26.134 "prchk_guard": false, 00:13:26.134 "ctrlr_loss_timeout_sec": 0, 00:13:26.134 "reconnect_delay_sec": 0, 00:13:26.134 "fast_io_fail_timeout_sec": 0, 00:13:26.134 "psk": "key0", 00:13:26.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:26.134 "hdgst": false, 00:13:26.134 "ddgst": false 00:13:26.134 } 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "method": "bdev_nvme_set_hotplug", 00:13:26.134 "params": { 00:13:26.134 "period_us": 100000, 00:13:26.134 "enable": false 00:13:26.134 } 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "method": "bdev_enable_histogram", 00:13:26.134 "params": { 00:13:26.134 "name": "nvme0n1", 00:13:26.134 "enable": true 00:13:26.134 } 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "method": "bdev_wait_for_examine" 00:13:26.134 } 00:13:26.134 ] 00:13:26.134 }, 00:13:26.134 { 00:13:26.134 "subsystem": "nbd", 00:13:26.134 "config": [] 00:13:26.134 } 00:13:26.134 ] 00:13:26.134 }' 00:13:26.134 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.134 16:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.134 [2024-07-12 16:17:09.852606] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 
initialization... 00:13:26.134 [2024-07-12 16:17:09.852744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73585 ] 00:13:26.393 [2024-07-12 16:17:09.987950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.393 [2024-07-12 16:17:10.075299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.652 [2024-07-12 16:17:10.191976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:26.652 [2024-07-12 16:17:10.226273] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:27.219 16:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.219 16:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:27.219 16:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:27.219 16:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:13:27.478 16:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.478 16:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:27.478 Running I/O for 1 seconds... 00:13:28.853 00:13:28.853 Latency(us) 00:13:28.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.853 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.853 Verification LBA range: start 0x0 length 0x2000 00:13:28.853 nvme0n1 : 1.03 3703.59 14.47 0.00 0.00 34088.19 10485.76 26571.87 00:13:28.853 =================================================================================================================== 00:13:28.853 Total : 3703.59 14.47 0.00 0.00 34088.19 10485.76 26571.87 00:13:28.853 0 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:28.853 nvmf_trace.0 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 73585 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73585 ']' 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 73585 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.853 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73585 00:13:28.853 killing process with pid 73585 00:13:28.853 Received shutdown signal, test time was about 1.000000 seconds 00:13:28.853 00:13:28.853 Latency(us) 00:13:28.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.854 =================================================================================================================== 00:13:28.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73585' 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73585 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73585 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.854 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.854 rmmod nvme_tcp 00:13:28.854 rmmod nvme_fabrics 00:13:29.113 rmmod nvme_keyring 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73553 ']' 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73553 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73553 ']' 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73553 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73553 00:13:29.113 killing process with pid 73553 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73553' 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73553 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73553 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.113 16:17:12 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.113 16:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:29.371 16:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.AyhhWNJxxq /tmp/tmp.7ct4xxoldp /tmp/tmp.RMDll0bN69 00:13:29.371 ************************************ 00:13:29.371 END TEST nvmf_tls 00:13:29.371 ************************************ 00:13:29.371 00:13:29.371 real 1m20.662s 00:13:29.371 user 2m7.110s 00:13:29.371 sys 0m26.168s 00:13:29.371 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.371 16:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.371 16:17:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:29.371 16:17:12 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:29.371 16:17:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:29.371 16:17:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.371 16:17:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:29.371 ************************************ 00:13:29.371 START TEST nvmf_fips 00:13:29.371 ************************************ 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:29.371 * Looking for test storage... 
00:13:29.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.371 16:17:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:29.371 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:13:29.372 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:13:29.630 Error setting digest 00:13:29.630 00F2308E357F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:13:29.630 00F2308E357F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.630 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:29.631 Cannot find device "nvmf_tgt_br" 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.631 Cannot find device "nvmf_tgt_br2" 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:29.631 Cannot find device "nvmf_tgt_br" 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:29.631 Cannot find device "nvmf_tgt_br2" 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:29.631 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:29.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:29.890 00:13:29.890 --- 10.0.0.2 ping statistics --- 00:13:29.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.890 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:29.890 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:29.890 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:13:29.890 00:13:29.890 --- 10.0.0.3 ping statistics --- 00:13:29.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.890 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:13:29.890 00:13:29.890 --- 10.0.0.1 ping statistics --- 00:13:29.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.890 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73851 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73851 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 73851 ']' 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.890 16:17:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:29.890 [2024-07-12 16:17:13.604230] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:13:29.890 [2024-07-12 16:17:13.604322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.149 [2024-07-12 16:17:13.741522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.149 [2024-07-12 16:17:13.800943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.149 [2024-07-12 16:17:13.801017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.149 [2024-07-12 16:17:13.801046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.149 [2024-07-12 16:17:13.801054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.149 [2024-07-12 16:17:13.801061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.149 [2024-07-12 16:17:13.801089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.149 [2024-07-12 16:17:13.831947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:31.086 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:31.345 [2024-07-12 16:17:14.829008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.345 [2024-07-12 16:17:14.844957] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:31.345 [2024-07-12 16:17:14.845158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.345 [2024-07-12 16:17:14.871928] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:31.345 malloc0 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
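The trace above shows the target side of the FIPS run being prepared: the interchange PSK is written to key.txt with 0600 permissions and setup_nvmf_tgt_conf drives rpc.py, which produces the "TCP Transport Init", TLS-listener and PSK-path notices that follow. A rough sketch of what that configuration amounts to is below. Only the commands and arguments visible in the trace are certain; the malloc sizes, the serial number and the nvmf_subsystem_add_host --psk form are assumptions inferred from the "malloc0" bdev name, the NVMF_SERIAL default in common.sh and the PSK-path deprecation warning at tcp.c:3679.

  # Sketch only: target-side TLS-PSK configuration as implied by the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
  chmod 0600 "$key"                                          # key file must not be world-readable

  $rpc nvmf_create_transport -t tcp -o -u 8192               # "*** TCP Transport Init ***" above
  $rpc bdev_malloc_create 64 512 -b malloc0                  # sizes assumed; the trace only names "malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME   # serial assumed from common.sh
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # "TLS support is considered experimental"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"   # assumed form; triggers the PSK-path warning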
00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73889 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73889 /var/tmp/bdevperf.sock 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 73889 ']' 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.345 16:17:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:31.345 [2024-07-12 16:17:14.978981] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:31.345 [2024-07-12 16:17:14.979080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73889 ] 00:13:31.604 [2024-07-12 16:17:15.119682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.604 [2024-07-12 16:17:15.191495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.604 [2024-07-12 16:17:15.226579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:32.540 16:17:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.540 16:17:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:13:32.540 16:17:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:32.540 [2024-07-12 16:17:16.204641] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:32.540 [2024-07-12 16:17:16.204776] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:32.799 TLSTESTn1 00:13:32.799 16:17:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.799 Running I/O for 10 seconds... 
00:13:42.795 00:13:42.795 Latency(us) 00:13:42.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.795 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:42.795 Verification LBA range: start 0x0 length 0x2000 00:13:42.795 TLSTESTn1 : 10.02 3892.10 15.20 0.00 0.00 32820.55 7864.32 30980.65 00:13:42.795 =================================================================================================================== 00:13:42.795 Total : 3892.10 15.20 0.00 0.00 32820.55 7864.32 30980.65 00:13:42.795 0 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:42.795 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:42.795 nvmf_trace.0 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73889 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 73889 ']' 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 73889 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73889 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:43.053 killing process with pid 73889 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73889' 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 73889 00:13:43.053 Received shutdown signal, test time was about 10.000000 seconds 00:13:43.053 00:13:43.053 Latency(us) 00:13:43.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.053 =================================================================================================================== 00:13:43.053 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:43.053 [2024-07-12 16:17:26.565773] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 73889 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
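On the initiator side, the sequence that produced the TLSTESTn1 numbers above is compact. A condensed sketch, using only the sockets, flags, NQNs and paths visible in the trace (waitforlisten is the autotest helper that polls the RPC socket; it is left as a comment here):

  # Sketch of the initiator-side flow traced above.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &   # -z: start idle, wait for RPC configuration
  bdevperf_pid=$!
  # waitforlisten "$bdevperf_pid" "$sock"                           # autotest_common.sh helper

  $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt     # TLS-PSK handshake happens on attach

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests   # the 10 s verify run above
  kill "$bdevperf_pid"                                              # cleanup, as killprocess 73889 does above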
00:13:43.053 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.312 rmmod nvme_tcp 00:13:43.312 rmmod nvme_fabrics 00:13:43.312 rmmod nvme_keyring 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73851 ']' 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73851 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 73851 ']' 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 73851 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73851 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:43.312 killing process with pid 73851 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73851' 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 73851 00:13:43.312 [2024-07-12 16:17:26.872862] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:43.312 16:17:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 73851 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:43.571 ************************************ 00:13:43.571 END TEST nvmf_fips 00:13:43.571 ************************************ 00:13:43.571 00:13:43.571 real 0m14.194s 00:13:43.571 user 0m19.575s 00:13:43.571 sys 0m5.570s 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.571 16:17:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:43.571 16:17:27 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:43.571 16:17:27 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:13:43.571 16:17:27 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:13:43.571 16:17:27 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:13:43.571 16:17:27 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.571 16:17:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.571 16:17:27 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:13:43.571 16:17:27 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.571 16:17:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.571 16:17:27 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:13:43.571 16:17:27 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:43.571 16:17:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.571 16:17:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.571 16:17:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.571 ************************************ 00:13:43.571 START TEST nvmf_identify 00:13:43.571 ************************************ 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:43.571 * Looking for test storage... 00:13:43.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.571 16:17:27 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.571 16:17:27 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:43.571 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:43.830 Cannot find device "nvmf_tgt_br" 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.830 Cannot find device "nvmf_tgt_br2" 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:13:43.830 16:17:27 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:43.830 Cannot find device "nvmf_tgt_br" 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:43.830 Cannot find device "nvmf_tgt_br2" 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.830 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
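The nvmf_veth_init calls traced around this point rebuild the same virtual topology used for the FIPS run earlier: a bridge joining the host-side initiator interface to a veth whose peer lives in the nvmf_tgt_ns_spdk namespace. Condensed to its essentials, keeping the addresses and names from the trace while omitting the second target interface and error handling, the plumbing amounts to the sketch below.

  # Condensed sketch of nvmf_veth_init (one target interface only).
  ip netns add nvmf_tgt_ns_spdk                               # the target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listener address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the two host-side ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # connectivity check before starting nvmf_tgt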
00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:44.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:13:44.089 00:13:44.089 --- 10.0.0.2 ping statistics --- 00:13:44.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.089 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:44.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:44.089 00:13:44.089 --- 10.0.0.3 ping statistics --- 00:13:44.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.089 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:44.089 00:13:44.089 --- 10.0.0.1 ping statistics --- 00:13:44.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.089 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74229 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74229 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74229 ']' 00:13:44.089 16:17:27 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.089 16:17:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.089 [2024-07-12 16:17:27.725349] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:44.089 [2024-07-12 16:17:27.725438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.348 [2024-07-12 16:17:27.865115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.348 [2024-07-12 16:17:27.942436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.348 [2024-07-12 16:17:27.942507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.348 [2024-07-12 16:17:27.942521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.348 [2024-07-12 16:17:27.942531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.348 [2024-07-12 16:17:27.942540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
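nvmfappstart above launches nvmf_tgt inside the namespace and then blocks until its RPC socket answers. A minimal sketch of that step, assuming the default /var/tmp/spdk.sock socket named in the waitforlisten message and using rpc_get_methods as a readiness probe (the real waitforlisten helper is more elaborate):

  # Sketch: start the target in the namespace and wait for its RPC socket.
  nvmf_tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  ip netns exec nvmf_tgt_ns_spdk $nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # identify test runs on 4 cores (0xF)
  nvmfpid=$!

  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1                                                      # poll until the app is listening
  done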
00:13:44.348 [2024-07-12 16:17:27.942925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.348 [2024-07-12 16:17:27.943200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.348 [2024-07-12 16:17:27.943334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.348 [2024-07-12 16:17:27.943340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.348 [2024-07-12 16:17:27.977051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.348 [2024-07-12 16:17:28.034171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.348 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.606 Malloc0 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.606 [2024-07-12 16:17:28.136961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:44.606 [ 00:13:44.606 { 00:13:44.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.606 "subtype": "Discovery", 00:13:44.606 "listen_addresses": [ 00:13:44.606 { 00:13:44.606 "trtype": "TCP", 00:13:44.606 "adrfam": "IPv4", 00:13:44.606 "traddr": "10.0.0.2", 00:13:44.606 "trsvcid": "4420" 00:13:44.606 } 00:13:44.606 ], 00:13:44.606 "allow_any_host": true, 00:13:44.606 "hosts": [] 00:13:44.606 }, 00:13:44.606 { 00:13:44.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.606 "subtype": "NVMe", 00:13:44.606 "listen_addresses": [ 00:13:44.606 { 00:13:44.606 "trtype": "TCP", 00:13:44.606 "adrfam": "IPv4", 00:13:44.606 "traddr": "10.0.0.2", 00:13:44.606 "trsvcid": "4420" 00:13:44.606 } 00:13:44.606 ], 00:13:44.606 "allow_any_host": true, 00:13:44.606 "hosts": [], 00:13:44.606 "serial_number": "SPDK00000000000001", 00:13:44.606 "model_number": "SPDK bdev Controller", 00:13:44.606 "max_namespaces": 32, 00:13:44.606 "min_cntlid": 1, 00:13:44.606 "max_cntlid": 65519, 00:13:44.606 "namespaces": [ 00:13:44.606 { 00:13:44.606 "nsid": 1, 00:13:44.606 "bdev_name": "Malloc0", 00:13:44.606 "name": "Malloc0", 00:13:44.606 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:44.606 "eui64": "ABCDEF0123456789", 00:13:44.606 "uuid": "8025eefa-05af-4400-9c10-bd2eceb43c7b" 00:13:44.606 } 00:13:44.606 ] 00:13:44.606 } 00:13:44.606 ] 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.606 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:44.606 [2024-07-12 16:17:28.190062] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:13:44.607 [2024-07-12 16:17:28.190112] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74257 ] 00:13:44.871 [2024-07-12 16:17:28.334930] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:44.871 [2024-07-12 16:17:28.335008] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:44.871 [2024-07-12 16:17:28.335016] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:44.872 [2024-07-12 16:17:28.335029] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:44.872 [2024-07-12 16:17:28.335037] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:44.872 [2024-07-12 16:17:28.335182] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:44.872 [2024-07-12 16:17:28.335250] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2172a60 0 00:13:44.872 [2024-07-12 16:17:28.341934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:44.872 [2024-07-12 16:17:28.341956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:44.872 [2024-07-12 16:17:28.341963] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:44.872 [2024-07-12 16:17:28.341967] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:44.872 [2024-07-12 16:17:28.342012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.342020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.342028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.872 [2024-07-12 16:17:28.342049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:44.872 [2024-07-12 16:17:28.342096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.872 [2024-07-12 16:17:28.349915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.872 [2024-07-12 16:17:28.349937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.872 [2024-07-12 16:17:28.349943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.349949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.872 [2024-07-12 16:17:28.349966] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:44.872 [2024-07-12 16:17:28.349975] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:44.872 [2024-07-12 16:17:28.349982] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:44.872 [2024-07-12 16:17:28.350002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.872 
[2024-07-12 16:17:28.350012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.872 [2024-07-12 16:17:28.350023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.872 [2024-07-12 16:17:28.350054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.872 [2024-07-12 16:17:28.350152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.872 [2024-07-12 16:17:28.350159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.872 [2024-07-12 16:17:28.350163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.872 [2024-07-12 16:17:28.350173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:44.872 [2024-07-12 16:17:28.350181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:44.872 [2024-07-12 16:17:28.350191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.872 [2024-07-12 16:17:28.350233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.872 [2024-07-12 16:17:28.350259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.872 [2024-07-12 16:17:28.350309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.872 [2024-07-12 16:17:28.350317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.872 [2024-07-12 16:17:28.350321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.872 [2024-07-12 16:17:28.350331] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:44.872 [2024-07-12 16:17:28.350341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.872 [2024-07-12 16:17:28.350349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.872 [2024-07-12 16:17:28.350366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.872 [2024-07-12 16:17:28.350385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.872 [2024-07-12 16:17:28.350664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.872 [2024-07-12 16:17:28.350677] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.872 [2024-07-12 16:17:28.350682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.872 [2024-07-12 16:17:28.350692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.872 [2024-07-12 16:17:28.350704] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.872 [2024-07-12 16:17:28.350737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.872 [2024-07-12 16:17:28.350756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.872 [2024-07-12 16:17:28.350958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.872 [2024-07-12 16:17:28.350970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.872 [2024-07-12 16:17:28.350975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.350980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.872 [2024-07-12 16:17:28.350985] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:44.872 [2024-07-12 16:17:28.350991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:44.872 [2024-07-12 16:17:28.351000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.872 [2024-07-12 16:17:28.351107] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:44.872 [2024-07-12 16:17:28.351113] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.872 [2024-07-12 16:17:28.351124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.872 [2024-07-12 16:17:28.351140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.872 [2024-07-12 16:17:28.351162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.872 [2024-07-12 16:17:28.351220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.872 [2024-07-12 16:17:28.351230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.872 [2024-07-12 16:17:28.351234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.872 
[2024-07-12 16:17:28.351239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.872 [2024-07-12 16:17:28.351245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.872 [2024-07-12 16:17:28.351256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.872 [2024-07-12 16:17:28.351274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.872 [2024-07-12 16:17:28.351295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.872 [2024-07-12 16:17:28.351344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.872 [2024-07-12 16:17:28.351351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.872 [2024-07-12 16:17:28.351355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.872 [2024-07-12 16:17:28.351365] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.872 [2024-07-12 16:17:28.351370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:44.872 [2024-07-12 16:17:28.351379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:44.872 [2024-07-12 16:17:28.351390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.872 [2024-07-12 16:17:28.351403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.872 [2024-07-12 16:17:28.351416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.872 [2024-07-12 16:17:28.351436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.872 [2024-07-12 16:17:28.351534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.872 [2024-07-12 16:17:28.351542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.872 [2024-07-12 16:17:28.351546] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351550] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2172a60): datao=0, datal=4096, cccid=0 00:13:44.872 [2024-07-12 16:17:28.351556] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b5840) on tqpair(0x2172a60): expected_datao=0, payload_size=4096 00:13:44.872 [2024-07-12 16:17:28.351561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.872 
[2024-07-12 16:17:28.351569] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351574] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.872 [2024-07-12 16:17:28.351604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.872 [2024-07-12 16:17:28.351608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.872 [2024-07-12 16:17:28.351612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.872 [2024-07-12 16:17:28.351621] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:44.873 [2024-07-12 16:17:28.351627] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:44.873 [2024-07-12 16:17:28.351632] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:44.873 [2024-07-12 16:17:28.351637] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:44.873 [2024-07-12 16:17:28.351642] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:44.873 [2024-07-12 16:17:28.351648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:44.873 [2024-07-12 16:17:28.351672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.873 [2024-07-12 16:17:28.351680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.351696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.873 [2024-07-12 16:17:28.351731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.873 [2024-07-12 16:17:28.351791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.873 [2024-07-12 16:17:28.351798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.873 [2024-07-12 16:17:28.351802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.873 [2024-07-12 16:17:28.351831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.351849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.873 [2024-07-12 16:17:28.351856] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.351870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.873 [2024-07-12 16:17:28.351877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.351892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.873 [2024-07-12 16:17:28.351899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.351913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.873 [2024-07-12 16:17:28.351919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.873 [2024-07-12 16:17:28.351948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.873 [2024-07-12 16:17:28.351958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.351963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.351971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.873 [2024-07-12 16:17:28.351994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5840, cid 0, qid 0 00:13:44.873 [2024-07-12 16:17:28.352003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b59c0, cid 1, qid 0 00:13:44.873 [2024-07-12 16:17:28.352008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5b40, cid 2, qid 0 00:13:44.873 [2024-07-12 16:17:28.352013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5cc0, cid 3, qid 0 00:13:44.873 [2024-07-12 16:17:28.352018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5e40, cid 4, qid 0 00:13:44.873 [2024-07-12 16:17:28.352112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.873 [2024-07-12 16:17:28.352119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.873 [2024-07-12 16:17:28.352123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5e40) on tqpair=0x2172a60 00:13:44.873 [2024-07-12 16:17:28.352134] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:44.873 [2024-07-12 16:17:28.352144] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:44.873 [2024-07-12 16:17:28.352157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.352169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.873 [2024-07-12 16:17:28.352190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5e40, cid 4, qid 0 00:13:44.873 [2024-07-12 16:17:28.352278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.873 [2024-07-12 16:17:28.352288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.873 [2024-07-12 16:17:28.352292] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352296] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2172a60): datao=0, datal=4096, cccid=4 00:13:44.873 [2024-07-12 16:17:28.352301] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b5e40) on tqpair(0x2172a60): expected_datao=0, payload_size=4096 00:13:44.873 [2024-07-12 16:17:28.352306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352314] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352318] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.873 [2024-07-12 16:17:28.352333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.873 [2024-07-12 16:17:28.352337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5e40) on tqpair=0x2172a60 00:13:44.873 [2024-07-12 16:17:28.352357] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:44.873 [2024-07-12 16:17:28.352386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.352400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.873 [2024-07-12 16:17:28.352408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.352423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.873 [2024-07-12 16:17:28.352449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x21b5e40, cid 4, qid 0 00:13:44.873 [2024-07-12 16:17:28.352457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5fc0, cid 5, qid 0 00:13:44.873 [2024-07-12 16:17:28.352564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.873 [2024-07-12 16:17:28.352572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.873 [2024-07-12 16:17:28.352576] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352580] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2172a60): datao=0, datal=1024, cccid=4 00:13:44.873 [2024-07-12 16:17:28.352585] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b5e40) on tqpair(0x2172a60): expected_datao=0, payload_size=1024 00:13:44.873 [2024-07-12 16:17:28.352590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352597] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352601] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.873 [2024-07-12 16:17:28.352614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.873 [2024-07-12 16:17:28.352618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5fc0) on tqpair=0x2172a60 00:13:44.873 [2024-07-12 16:17:28.352640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.873 [2024-07-12 16:17:28.352648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.873 [2024-07-12 16:17:28.352652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5e40) on tqpair=0x2172a60 00:13:44.873 [2024-07-12 16:17:28.352669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.352682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.873 [2024-07-12 16:17:28.352707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5e40, cid 4, qid 0 00:13:44.873 [2024-07-12 16:17:28.352786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.873 [2024-07-12 16:17:28.352793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.873 [2024-07-12 16:17:28.352797] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352801] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2172a60): datao=0, datal=3072, cccid=4 00:13:44.873 [2024-07-12 16:17:28.352806] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b5e40) on tqpair(0x2172a60): expected_datao=0, payload_size=3072 00:13:44.873 [2024-07-12 16:17:28.352811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352819] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352823] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.873 [2024-07-12 16:17:28.352838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.873 [2024-07-12 16:17:28.352842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5e40) on tqpair=0x2172a60 00:13:44.873 [2024-07-12 16:17:28.352857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.873 [2024-07-12 16:17:28.352862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2172a60) 00:13:44.873 [2024-07-12 16:17:28.352882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.873 [2024-07-12 16:17:28.352909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5e40, cid 4, qid 0 00:13:44.873 [2024-07-12 16:17:28.352982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.874 [2024-07-12 16:17:28.352990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.874 [2024-07-12 16:17:28.352994] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.874 [2024-07-12 16:17:28.352998] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2172a60): datao=0, datal=8, cccid=4 00:13:44.874 [2024-07-12 16:17:28.353003] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b5e40) on tqpair(0x2172a60): expected_datao=0, payload_size=8 00:13:44.874 [2024-07-12 16:17:28.353008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.874 [2024-07-12 16:17:28.353015] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.874 [2024-07-12 16:17:28.353019] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.874 [2024-07-12 16:17:28.353034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.874 [2024-07-12 16:17:28.353042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.874 [2024-07-12 16:17:28.353046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.874 [2024-07-12 16:17:28.353051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5e40) on tqpair=0x2172a60 00:13:44.874 ===================================================== 00:13:44.874 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:44.874 ===================================================== 00:13:44.874 Controller Capabilities/Features 00:13:44.874 ================================ 00:13:44.874 Vendor ID: 0000 00:13:44.874 Subsystem Vendor ID: 0000 00:13:44.874 Serial Number: .................... 00:13:44.874 Model Number: ........................................ 
00:13:44.874 Firmware Version: 24.09 00:13:44.874 Recommended Arb Burst: 0 00:13:44.874 IEEE OUI Identifier: 00 00 00 00:13:44.874 Multi-path I/O 00:13:44.874 May have multiple subsystem ports: No 00:13:44.874 May have multiple controllers: No 00:13:44.874 Associated with SR-IOV VF: No 00:13:44.874 Max Data Transfer Size: 131072 00:13:44.874 Max Number of Namespaces: 0 00:13:44.874 Max Number of I/O Queues: 1024 00:13:44.874 NVMe Specification Version (VS): 1.3 00:13:44.874 NVMe Specification Version (Identify): 1.3 00:13:44.874 Maximum Queue Entries: 128 00:13:44.874 Contiguous Queues Required: Yes 00:13:44.874 Arbitration Mechanisms Supported 00:13:44.874 Weighted Round Robin: Not Supported 00:13:44.874 Vendor Specific: Not Supported 00:13:44.874 Reset Timeout: 15000 ms 00:13:44.874 Doorbell Stride: 4 bytes 00:13:44.874 NVM Subsystem Reset: Not Supported 00:13:44.874 Command Sets Supported 00:13:44.874 NVM Command Set: Supported 00:13:44.874 Boot Partition: Not Supported 00:13:44.874 Memory Page Size Minimum: 4096 bytes 00:13:44.874 Memory Page Size Maximum: 4096 bytes 00:13:44.874 Persistent Memory Region: Not Supported 00:13:44.874 Optional Asynchronous Events Supported 00:13:44.874 Namespace Attribute Notices: Not Supported 00:13:44.874 Firmware Activation Notices: Not Supported 00:13:44.874 ANA Change Notices: Not Supported 00:13:44.874 PLE Aggregate Log Change Notices: Not Supported 00:13:44.874 LBA Status Info Alert Notices: Not Supported 00:13:44.874 EGE Aggregate Log Change Notices: Not Supported 00:13:44.874 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.874 Zone Descriptor Change Notices: Not Supported 00:13:44.874 Discovery Log Change Notices: Supported 00:13:44.874 Controller Attributes 00:13:44.874 128-bit Host Identifier: Not Supported 00:13:44.874 Non-Operational Permissive Mode: Not Supported 00:13:44.874 NVM Sets: Not Supported 00:13:44.874 Read Recovery Levels: Not Supported 00:13:44.874 Endurance Groups: Not Supported 00:13:44.874 Predictable Latency Mode: Not Supported 00:13:44.874 Traffic Based Keep ALive: Not Supported 00:13:44.874 Namespace Granularity: Not Supported 00:13:44.874 SQ Associations: Not Supported 00:13:44.874 UUID List: Not Supported 00:13:44.874 Multi-Domain Subsystem: Not Supported 00:13:44.874 Fixed Capacity Management: Not Supported 00:13:44.874 Variable Capacity Management: Not Supported 00:13:44.874 Delete Endurance Group: Not Supported 00:13:44.874 Delete NVM Set: Not Supported 00:13:44.874 Extended LBA Formats Supported: Not Supported 00:13:44.874 Flexible Data Placement Supported: Not Supported 00:13:44.874 00:13:44.874 Controller Memory Buffer Support 00:13:44.874 ================================ 00:13:44.874 Supported: No 00:13:44.874 00:13:44.874 Persistent Memory Region Support 00:13:44.874 ================================ 00:13:44.874 Supported: No 00:13:44.874 00:13:44.874 Admin Command Set Attributes 00:13:44.874 ============================ 00:13:44.874 Security Send/Receive: Not Supported 00:13:44.874 Format NVM: Not Supported 00:13:44.874 Firmware Activate/Download: Not Supported 00:13:44.874 Namespace Management: Not Supported 00:13:44.874 Device Self-Test: Not Supported 00:13:44.874 Directives: Not Supported 00:13:44.874 NVMe-MI: Not Supported 00:13:44.874 Virtualization Management: Not Supported 00:13:44.874 Doorbell Buffer Config: Not Supported 00:13:44.874 Get LBA Status Capability: Not Supported 00:13:44.874 Command & Feature Lockdown Capability: Not Supported 00:13:44.874 Abort Command Limit: 1 00:13:44.874 Async 
Event Request Limit: 4 00:13:44.874 Number of Firmware Slots: N/A 00:13:44.874 Firmware Slot 1 Read-Only: N/A 00:13:44.874 Firmware Activation Without Reset: N/A 00:13:44.874 Multiple Update Detection Support: N/A 00:13:44.874 Firmware Update Granularity: No Information Provided 00:13:44.874 Per-Namespace SMART Log: No 00:13:44.874 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.874 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:44.874 Command Effects Log Page: Not Supported 00:13:44.874 Get Log Page Extended Data: Supported 00:13:44.874 Telemetry Log Pages: Not Supported 00:13:44.874 Persistent Event Log Pages: Not Supported 00:13:44.874 Supported Log Pages Log Page: May Support 00:13:44.874 Commands Supported & Effects Log Page: Not Supported 00:13:44.874 Feature Identifiers & Effects Log Page:May Support 00:13:44.874 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.874 Data Area 4 for Telemetry Log: Not Supported 00:13:44.874 Error Log Page Entries Supported: 128 00:13:44.874 Keep Alive: Not Supported 00:13:44.874 00:13:44.874 NVM Command Set Attributes 00:13:44.874 ========================== 00:13:44.874 Submission Queue Entry Size 00:13:44.874 Max: 1 00:13:44.874 Min: 1 00:13:44.874 Completion Queue Entry Size 00:13:44.874 Max: 1 00:13:44.874 Min: 1 00:13:44.874 Number of Namespaces: 0 00:13:44.874 Compare Command: Not Supported 00:13:44.874 Write Uncorrectable Command: Not Supported 00:13:44.874 Dataset Management Command: Not Supported 00:13:44.874 Write Zeroes Command: Not Supported 00:13:44.874 Set Features Save Field: Not Supported 00:13:44.874 Reservations: Not Supported 00:13:44.874 Timestamp: Not Supported 00:13:44.874 Copy: Not Supported 00:13:44.874 Volatile Write Cache: Not Present 00:13:44.874 Atomic Write Unit (Normal): 1 00:13:44.874 Atomic Write Unit (PFail): 1 00:13:44.874 Atomic Compare & Write Unit: 1 00:13:44.874 Fused Compare & Write: Supported 00:13:44.874 Scatter-Gather List 00:13:44.874 SGL Command Set: Supported 00:13:44.874 SGL Keyed: Supported 00:13:44.874 SGL Bit Bucket Descriptor: Not Supported 00:13:44.874 SGL Metadata Pointer: Not Supported 00:13:44.874 Oversized SGL: Not Supported 00:13:44.874 SGL Metadata Address: Not Supported 00:13:44.874 SGL Offset: Supported 00:13:44.874 Transport SGL Data Block: Not Supported 00:13:44.874 Replay Protected Memory Block: Not Supported 00:13:44.874 00:13:44.874 Firmware Slot Information 00:13:44.874 ========================= 00:13:44.874 Active slot: 0 00:13:44.874 00:13:44.874 00:13:44.874 Error Log 00:13:44.874 ========= 00:13:44.874 00:13:44.874 Active Namespaces 00:13:44.874 ================= 00:13:44.874 Discovery Log Page 00:13:44.874 ================== 00:13:44.874 Generation Counter: 2 00:13:44.874 Number of Records: 2 00:13:44.874 Record Format: 0 00:13:44.874 00:13:44.874 Discovery Log Entry 0 00:13:44.874 ---------------------- 00:13:44.874 Transport Type: 3 (TCP) 00:13:44.874 Address Family: 1 (IPv4) 00:13:44.874 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:44.874 Entry Flags: 00:13:44.874 Duplicate Returned Information: 1 00:13:44.874 Explicit Persistent Connection Support for Discovery: 1 00:13:44.874 Transport Requirements: 00:13:44.874 Secure Channel: Not Required 00:13:44.874 Port ID: 0 (0x0000) 00:13:44.874 Controller ID: 65535 (0xffff) 00:13:44.874 Admin Max SQ Size: 128 00:13:44.874 Transport Service Identifier: 4420 00:13:44.874 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:44.874 Transport Address: 10.0.0.2 00:13:44.874 
Discovery Log Entry 1 00:13:44.874 ---------------------- 00:13:44.874 Transport Type: 3 (TCP) 00:13:44.874 Address Family: 1 (IPv4) 00:13:44.874 Subsystem Type: 2 (NVM Subsystem) 00:13:44.874 Entry Flags: 00:13:44.874 Duplicate Returned Information: 0 00:13:44.874 Explicit Persistent Connection Support for Discovery: 0 00:13:44.874 Transport Requirements: 00:13:44.874 Secure Channel: Not Required 00:13:44.874 Port ID: 0 (0x0000) 00:13:44.874 Controller ID: 65535 (0xffff) 00:13:44.874 Admin Max SQ Size: 128 00:13:44.874 Transport Service Identifier: 4420 00:13:44.874 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:44.875 Transport Address: 10.0.0.2 [2024-07-12 16:17:28.353179] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:44.875 [2024-07-12 16:17:28.353202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5840) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.353214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.875 [2024-07-12 16:17:28.353224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b59c0) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.353230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.875 [2024-07-12 16:17:28.353235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5b40) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.353240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.875 [2024-07-12 16:17:28.353246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5cc0) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.353251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.875 [2024-07-12 16:17:28.353262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2172a60) 00:13:44.875 [2024-07-12 16:17:28.353280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.875 [2024-07-12 16:17:28.353311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5cc0, cid 3, qid 0 00:13:44.875 [2024-07-12 16:17:28.353370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.875 [2024-07-12 16:17:28.353378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.875 [2024-07-12 16:17:28.353382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5cc0) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.353395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2172a60) 00:13:44.875 [2024-07-12 
16:17:28.353411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.875 [2024-07-12 16:17:28.353434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5cc0, cid 3, qid 0 00:13:44.875 [2024-07-12 16:17:28.353499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.875 [2024-07-12 16:17:28.353513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.875 [2024-07-12 16:17:28.353517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5cc0) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.353527] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:44.875 [2024-07-12 16:17:28.353533] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:44.875 [2024-07-12 16:17:28.353544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2172a60) 00:13:44.875 [2024-07-12 16:17:28.353561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.875 [2024-07-12 16:17:28.353580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5cc0, cid 3, qid 0 00:13:44.875 [2024-07-12 16:17:28.353628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.875 [2024-07-12 16:17:28.353650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.875 [2024-07-12 16:17:28.353655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5cc0) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.353673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2172a60) 00:13:44.875 [2024-07-12 16:17:28.353690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.875 [2024-07-12 16:17:28.353709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5cc0, cid 3, qid 0 00:13:44.875 [2024-07-12 16:17:28.353757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.875 [2024-07-12 16:17:28.353770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.875 [2024-07-12 16:17:28.353774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5cc0) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.353790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.353799] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2172a60) 00:13:44.875 [2024-07-12 16:17:28.353807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.875 [2024-07-12 16:17:28.353825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5cc0, cid 3, qid 0 00:13:44.875 [2024-07-12 16:17:28.357911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.875 [2024-07-12 16:17:28.357946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.875 [2024-07-12 16:17:28.357955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.357963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5cc0) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.357985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.357996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.358002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2172a60) 00:13:44.875 [2024-07-12 16:17:28.358017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.875 [2024-07-12 16:17:28.358059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b5cc0, cid 3, qid 0 00:13:44.875 [2024-07-12 16:17:28.358112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.875 [2024-07-12 16:17:28.358124] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.875 [2024-07-12 16:17:28.358131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.358139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b5cc0) on tqpair=0x2172a60 00:13:44.875 [2024-07-12 16:17:28.358155] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:13:44.875 00:13:44.875 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:44.875 [2024-07-12 16:17:28.401628] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
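This is the second of the two spdk_nvme_identify runs in this test: the first (above) interrogated the discovery subsystem, while this one targets nqn.2016-06.io.spdk:cnode1, the NVM subsystem carrying the Malloc0 namespace. Outside the harness, the two invocations reduce to the following sketch (same binary path and -r transport ID strings as echoed in the log; -L all enables the debug tracing seen here):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all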
00:13:44.875 [2024-07-12 16:17:28.401682] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74259 ] 00:13:44.875 [2024-07-12 16:17:28.541592] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:44.875 [2024-07-12 16:17:28.541655] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:44.875 [2024-07-12 16:17:28.541663] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:44.875 [2024-07-12 16:17:28.541675] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:44.875 [2024-07-12 16:17:28.541683] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:44.875 [2024-07-12 16:17:28.541807] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:44.875 [2024-07-12 16:17:28.541858] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1599a60 0 00:13:44.875 [2024-07-12 16:17:28.545924] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:44.875 [2024-07-12 16:17:28.545937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:44.875 [2024-07-12 16:17:28.545942] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:44.875 [2024-07-12 16:17:28.545946] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:44.875 [2024-07-12 16:17:28.545991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.545999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.546003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.875 [2024-07-12 16:17:28.546017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:44.875 [2024-07-12 16:17:28.546052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.875 [2024-07-12 16:17:28.552928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.875 [2024-07-12 16:17:28.552951] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.875 [2024-07-12 16:17:28.552957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.552962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.875 [2024-07-12 16:17:28.552978] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:44.875 [2024-07-12 16:17:28.552987] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:13:44.875 [2024-07-12 16:17:28.552995] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:44.875 [2024-07-12 16:17:28.553013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.553019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.553023] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.875 [2024-07-12 16:17:28.553033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.875 [2024-07-12 16:17:28.553060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.875 [2024-07-12 16:17:28.553393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.875 [2024-07-12 16:17:28.553410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.875 [2024-07-12 16:17:28.553415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.553419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.875 [2024-07-12 16:17:28.553425] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:44.875 [2024-07-12 16:17:28.553435] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:44.875 [2024-07-12 16:17:28.553443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.553448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.875 [2024-07-12 16:17:28.553452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.876 [2024-07-12 16:17:28.553460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.876 [2024-07-12 16:17:28.553480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.876 [2024-07-12 16:17:28.553764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.876 [2024-07-12 16:17:28.553778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.876 [2024-07-12 16:17:28.553783] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.553788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.876 [2024-07-12 16:17:28.553794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:44.876 [2024-07-12 16:17:28.553804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.876 [2024-07-12 16:17:28.553812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.553817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.553821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.876 [2024-07-12 16:17:28.553829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.876 [2024-07-12 16:17:28.553849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.876 [2024-07-12 16:17:28.554116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.876 [2024-07-12 16:17:28.554131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.876 [2024-07-12 16:17:28.554136] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.554140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.876 [2024-07-12 16:17:28.554146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.876 [2024-07-12 16:17:28.554158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.554163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.554167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.876 [2024-07-12 16:17:28.554175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.876 [2024-07-12 16:17:28.554196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.876 [2024-07-12 16:17:28.554472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.876 [2024-07-12 16:17:28.554486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.876 [2024-07-12 16:17:28.554491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.554496] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.876 [2024-07-12 16:17:28.554501] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:44.876 [2024-07-12 16:17:28.554506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:44.876 [2024-07-12 16:17:28.554516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.876 [2024-07-12 16:17:28.554622] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:44.876 [2024-07-12 16:17:28.554627] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.876 [2024-07-12 16:17:28.554636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.554641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.554645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.876 [2024-07-12 16:17:28.554653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.876 [2024-07-12 16:17:28.554674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.876 [2024-07-12 16:17:28.555025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.876 [2024-07-12 16:17:28.555037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.876 [2024-07-12 16:17:28.555046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.876 [2024-07-12 16:17:28.555056] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.876 [2024-07-12 16:17:28.555068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.876 [2024-07-12 16:17:28.555085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.876 [2024-07-12 16:17:28.555106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.876 [2024-07-12 16:17:28.555445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.876 [2024-07-12 16:17:28.555459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.876 [2024-07-12 16:17:28.555464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.876 [2024-07-12 16:17:28.555473] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.876 [2024-07-12 16:17:28.555479] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:44.876 [2024-07-12 16:17:28.555488] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:44.876 [2024-07-12 16:17:28.555499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.876 [2024-07-12 16:17:28.555511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.876 [2024-07-12 16:17:28.555524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.876 [2024-07-12 16:17:28.555544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.876 [2024-07-12 16:17:28.555847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.876 [2024-07-12 16:17:28.555873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.876 [2024-07-12 16:17:28.555879] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555883] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1599a60): datao=0, datal=4096, cccid=0 00:13:44.876 [2024-07-12 16:17:28.555889] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15dc840) on tqpair(0x1599a60): expected_datao=0, payload_size=4096 00:13:44.876 [2024-07-12 16:17:28.555895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555903] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555908] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 
16:17:28.555918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.876 [2024-07-12 16:17:28.555924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.876 [2024-07-12 16:17:28.555928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.876 [2024-07-12 16:17:28.555942] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:44.876 [2024-07-12 16:17:28.555947] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:44.876 [2024-07-12 16:17:28.555952] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:44.876 [2024-07-12 16:17:28.555957] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:44.876 [2024-07-12 16:17:28.555962] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:44.876 [2024-07-12 16:17:28.555968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:44.876 [2024-07-12 16:17:28.555978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.876 [2024-07-12 16:17:28.555986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.876 [2024-07-12 16:17:28.555995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.876 [2024-07-12 16:17:28.556004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.876 [2024-07-12 16:17:28.556026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.876 [2024-07-12 16:17:28.556427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.876 [2024-07-12 16:17:28.556443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.876 [2024-07-12 16:17:28.556448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.877 [2024-07-12 16:17:28.556461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1599a60) 00:13:44.877 [2024-07-12 16:17:28.556477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.877 [2024-07-12 16:17:28.556484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1599a60) 00:13:44.877 
[2024-07-12 16:17:28.556498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.877 [2024-07-12 16:17:28.556505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1599a60) 00:13:44.877 [2024-07-12 16:17:28.556519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.877 [2024-07-12 16:17:28.556525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.877 [2024-07-12 16:17:28.556539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.877 [2024-07-12 16:17:28.556545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.556559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.556568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.556572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1599a60) 00:13:44.877 [2024-07-12 16:17:28.556580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.877 [2024-07-12 16:17:28.556604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc840, cid 0, qid 0 00:13:44.877 [2024-07-12 16:17:28.556611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dc9c0, cid 1, qid 0 00:13:44.877 [2024-07-12 16:17:28.556616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dcb40, cid 2, qid 0 00:13:44.877 [2024-07-12 16:17:28.556621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.877 [2024-07-12 16:17:28.556626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dce40, cid 4, qid 0 00:13:44.877 [2024-07-12 16:17:28.559908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.877 [2024-07-12 16:17:28.559925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.877 [2024-07-12 16:17:28.559930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.559935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dce40) on tqpair=0x1599a60 00:13:44.877 [2024-07-12 16:17:28.559941] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:44.877 [2024-07-12 16:17:28.559953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.559965] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.559973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.559981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.559986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.559990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1599a60) 00:13:44.877 [2024-07-12 16:17:28.559999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.877 [2024-07-12 16:17:28.560024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dce40, cid 4, qid 0 00:13:44.877 [2024-07-12 16:17:28.560323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.877 [2024-07-12 16:17:28.560338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.877 [2024-07-12 16:17:28.560343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.560348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dce40) on tqpair=0x1599a60 00:13:44.877 [2024-07-12 16:17:28.560414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.560427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.560436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.560441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1599a60) 00:13:44.877 [2024-07-12 16:17:28.560449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.877 [2024-07-12 16:17:28.560471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dce40, cid 4, qid 0 00:13:44.877 [2024-07-12 16:17:28.560784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.877 [2024-07-12 16:17:28.560800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.877 [2024-07-12 16:17:28.560805] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.560809] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1599a60): datao=0, datal=4096, cccid=4 00:13:44.877 [2024-07-12 16:17:28.560815] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15dce40) on tqpair(0x1599a60): expected_datao=0, payload_size=4096 00:13:44.877 [2024-07-12 16:17:28.560820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.560828] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.560832] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.560842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.877 [2024-07-12 16:17:28.560848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:13:44.877 [2024-07-12 16:17:28.560852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.560856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dce40) on tqpair=0x1599a60 00:13:44.877 [2024-07-12 16:17:28.560884] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:44.877 [2024-07-12 16:17:28.560898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.560910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.560919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.560924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1599a60) 00:13:44.877 [2024-07-12 16:17:28.560932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.877 [2024-07-12 16:17:28.560956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dce40, cid 4, qid 0 00:13:44.877 [2024-07-12 16:17:28.561337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.877 [2024-07-12 16:17:28.561353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.877 [2024-07-12 16:17:28.561358] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.561362] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1599a60): datao=0, datal=4096, cccid=4 00:13:44.877 [2024-07-12 16:17:28.561367] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15dce40) on tqpair(0x1599a60): expected_datao=0, payload_size=4096 00:13:44.877 [2024-07-12 16:17:28.561372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.561380] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.561384] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.561393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.877 [2024-07-12 16:17:28.561400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.877 [2024-07-12 16:17:28.561404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.561408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dce40) on tqpair=0x1599a60 00:13:44.877 [2024-07-12 16:17:28.561424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.561437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.561446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.561451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1599a60) 00:13:44.877 [2024-07-12 16:17:28.561459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.877 [2024-07-12 16:17:28.561481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dce40, cid 4, qid 0 00:13:44.877 [2024-07-12 16:17:28.561988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.877 [2024-07-12 16:17:28.562003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.877 [2024-07-12 16:17:28.562008] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.562012] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1599a60): datao=0, datal=4096, cccid=4 00:13:44.877 [2024-07-12 16:17:28.562018] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15dce40) on tqpair(0x1599a60): expected_datao=0, payload_size=4096 00:13:44.877 [2024-07-12 16:17:28.562022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.562030] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.562035] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.562044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.877 [2024-07-12 16:17:28.562050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.877 [2024-07-12 16:17:28.562054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.877 [2024-07-12 16:17:28.562058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dce40) on tqpair=0x1599a60 00:13:44.877 [2024-07-12 16:17:28.562068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.562077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.562088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.562096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.562101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.562108] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:44.877 [2024-07-12 16:17:28.562114] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:44.878 [2024-07-12 16:17:28.562119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:44.878 [2024-07-12 16:17:28.562125] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:44.878 [2024-07-12 16:17:28.562141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.562146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.562154] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.878 [2024-07-12 16:17:28.562162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.562166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.562170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.562177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.878 [2024-07-12 16:17:28.562204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dce40, cid 4, qid 0 00:13:44.878 [2024-07-12 16:17:28.562212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dcfc0, cid 5, qid 0 00:13:44.878 [2024-07-12 16:17:28.562670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.878 [2024-07-12 16:17:28.562684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.878 [2024-07-12 16:17:28.562689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.562693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dce40) on tqpair=0x1599a60 00:13:44.878 [2024-07-12 16:17:28.562701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.878 [2024-07-12 16:17:28.562707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.878 [2024-07-12 16:17:28.562711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.562715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dcfc0) on tqpair=0x1599a60 00:13:44.878 [2024-07-12 16:17:28.562727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.562732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.562739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.878 [2024-07-12 16:17:28.562759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dcfc0, cid 5, qid 0 00:13:44.878 [2024-07-12 16:17:28.563080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.878 [2024-07-12 16:17:28.563096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.878 [2024-07-12 16:17:28.563100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dcfc0) on tqpair=0x1599a60 00:13:44.878 [2024-07-12 16:17:28.563117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.563129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.878 [2024-07-12 16:17:28.563150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dcfc0, cid 5, qid 0 00:13:44.878 [2024-07-12 16:17:28.563319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.878 
[2024-07-12 16:17:28.563333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.878 [2024-07-12 16:17:28.563338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dcfc0) on tqpair=0x1599a60 00:13:44.878 [2024-07-12 16:17:28.563354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.563366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.878 [2024-07-12 16:17:28.563385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dcfc0, cid 5, qid 0 00:13:44.878 [2024-07-12 16:17:28.563460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.878 [2024-07-12 16:17:28.563467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.878 [2024-07-12 16:17:28.563471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dcfc0) on tqpair=0x1599a60 00:13:44.878 [2024-07-12 16:17:28.563494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.563508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.878 [2024-07-12 16:17:28.563520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.563530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.878 [2024-07-12 16:17:28.563538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.563549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.878 [2024-07-12 16:17:28.563561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.563566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1599a60) 00:13:44.878 [2024-07-12 16:17:28.563572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.878 [2024-07-12 16:17:28.563593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dcfc0, cid 5, qid 0 00:13:44.878 [2024-07-12 16:17:28.563601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dce40, cid 4, qid 0 00:13:44.878 [2024-07-12 16:17:28.563606] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dd140, cid 6, qid 0 00:13:44.878 [2024-07-12 16:17:28.563611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dd2c0, cid 7, qid 0 00:13:44.878 [2024-07-12 16:17:28.566907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.878 [2024-07-12 16:17:28.566925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.878 [2024-07-12 16:17:28.566930] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.566934] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1599a60): datao=0, datal=8192, cccid=5 00:13:44.878 [2024-07-12 16:17:28.566939] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15dcfc0) on tqpair(0x1599a60): expected_datao=0, payload_size=8192 00:13:44.878 [2024-07-12 16:17:28.566945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.566953] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.566957] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.566964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.878 [2024-07-12 16:17:28.566970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.878 [2024-07-12 16:17:28.566974] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.566977] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1599a60): datao=0, datal=512, cccid=4 00:13:44.878 [2024-07-12 16:17:28.566982] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15dce40) on tqpair(0x1599a60): expected_datao=0, payload_size=512 00:13:44.878 [2024-07-12 16:17:28.566987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.566994] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.566998] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.878 [2024-07-12 16:17:28.567010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.878 [2024-07-12 16:17:28.567014] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567017] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1599a60): datao=0, datal=512, cccid=6 00:13:44.878 [2024-07-12 16:17:28.567022] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15dd140) on tqpair(0x1599a60): expected_datao=0, payload_size=512 00:13:44.878 [2024-07-12 16:17:28.567027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567033] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567037] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.878 [2024-07-12 16:17:28.567049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.878 [2024-07-12 16:17:28.567053] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567057] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1599a60): datao=0, datal=4096, cccid=7 00:13:44.878 [2024-07-12 16:17:28.567061] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15dd2c0) on tqpair(0x1599a60): expected_datao=0, payload_size=4096 00:13:44.878 [2024-07-12 16:17:28.567066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567073] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567077] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.878 [2024-07-12 16:17:28.567089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.878 [2024-07-12 16:17:28.567093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.878 [2024-07-12 16:17:28.567097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dcfc0) on tqpair=0x1599a60 00:13:44.878 [2024-07-12 16:17:28.567117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.878 [2024-07-12 16:17:28.567125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.878 ===================================================== 00:13:44.878 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.878 ===================================================== 00:13:44.878 Controller Capabilities/Features 00:13:44.878 ================================ 00:13:44.878 Vendor ID: 8086 00:13:44.878 Subsystem Vendor ID: 8086 00:13:44.878 Serial Number: SPDK00000000000001 00:13:44.878 Model Number: SPDK bdev Controller 00:13:44.878 Firmware Version: 24.09 00:13:44.878 Recommended Arb Burst: 6 00:13:44.878 IEEE OUI Identifier: e4 d2 5c 00:13:44.878 Multi-path I/O 00:13:44.878 May have multiple subsystem ports: Yes 00:13:44.878 May have multiple controllers: Yes 00:13:44.878 Associated with SR-IOV VF: No 00:13:44.878 Max Data Transfer Size: 131072 00:13:44.878 Max Number of Namespaces: 32 00:13:44.878 Max Number of I/O Queues: 127 00:13:44.878 NVMe Specification Version (VS): 1.3 00:13:44.878 NVMe Specification Version (Identify): 1.3 00:13:44.878 Maximum Queue Entries: 128 00:13:44.878 Contiguous Queues Required: Yes 00:13:44.878 Arbitration Mechanisms Supported 00:13:44.878 Weighted Round Robin: Not Supported 00:13:44.878 Vendor Specific: Not Supported 00:13:44.878 Reset Timeout: 15000 ms 00:13:44.879 Doorbell Stride: 4 bytes 00:13:44.879 NVM Subsystem Reset: Not Supported 00:13:44.879 Command Sets Supported 00:13:44.879 NVM Command Set: Supported 00:13:44.879 Boot Partition: Not Supported 00:13:44.879 Memory Page Size Minimum: 4096 bytes 00:13:44.879 Memory Page Size Maximum: 4096 bytes 00:13:44.879 Persistent Memory Region: Not Supported 00:13:44.879 Optional Asynchronous Events Supported 00:13:44.879 Namespace Attribute Notices: Supported 00:13:44.879 Firmware Activation Notices: Not Supported 00:13:44.879 ANA Change Notices: Not Supported 00:13:44.879 PLE Aggregate Log Change Notices: Not Supported 00:13:44.879 LBA Status Info Alert Notices: Not Supported 00:13:44.879 EGE Aggregate Log Change Notices: Not Supported 00:13:44.879 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.879 Zone Descriptor Change Notices: Not Supported 00:13:44.879 Discovery Log Change Notices: Not Supported 00:13:44.879 Controller Attributes 00:13:44.879 128-bit Host Identifier: Supported 00:13:44.879 Non-Operational Permissive Mode: Not Supported 00:13:44.879 
NVM Sets: Not Supported 00:13:44.879 Read Recovery Levels: Not Supported 00:13:44.879 Endurance Groups: Not Supported 00:13:44.879 Predictable Latency Mode: Not Supported 00:13:44.879 Traffic Based Keep ALive: Not Supported 00:13:44.879 Namespace Granularity: Not Supported 00:13:44.879 SQ Associations: Not Supported 00:13:44.879 UUID List: Not Supported 00:13:44.879 Multi-Domain Subsystem: Not Supported 00:13:44.879 Fixed Capacity Management: Not Supported 00:13:44.879 Variable Capacity Management: Not Supported 00:13:44.879 Delete Endurance Group: Not Supported 00:13:44.879 Delete NVM Set: Not Supported 00:13:44.879 Extended LBA Formats Supported: Not Supported 00:13:44.879 Flexible Data Placement Supported: Not Supported 00:13:44.879 00:13:44.879 Controller Memory Buffer Support 00:13:44.879 ================================ 00:13:44.879 Supported: No 00:13:44.879 00:13:44.879 Persistent Memory Region Support 00:13:44.879 ================================ 00:13:44.879 Supported: No 00:13:44.879 00:13:44.879 Admin Command Set Attributes 00:13:44.879 ============================ 00:13:44.879 Security Send/Receive: Not Supported 00:13:44.879 Format NVM: Not Supported 00:13:44.879 Firmware Activate/Download: Not Supported 00:13:44.879 Namespace Management: Not Supported 00:13:44.879 Device Self-Test: Not Supported 00:13:44.879 Directives: Not Supported 00:13:44.879 NVMe-MI: Not Supported 00:13:44.879 Virtualization Management: Not Supported 00:13:44.879 Doorbell Buffer Config: Not Supported 00:13:44.879 Get LBA Status Capability: Not Supported 00:13:44.879 Command & Feature Lockdown Capability: Not Supported 00:13:44.879 Abort Command Limit: 4 00:13:44.879 Async Event Request Limit: 4 00:13:44.879 Number of Firmware Slots: N/A 00:13:44.879 Firmware Slot 1 Read-Only: N/A 00:13:44.879 Firmware Activation Without Reset: [2024-07-12 16:17:28.567129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.879 [2024-07-12 16:17:28.567133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dce40) on tqpair=0x1599a60 00:13:44.879 [2024-07-12 16:17:28.567146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.879 [2024-07-12 16:17:28.567152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.879 [2024-07-12 16:17:28.567156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.879 [2024-07-12 16:17:28.567160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dd140) on tqpair=0x1599a60 00:13:44.879 [2024-07-12 16:17:28.567168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.879 [2024-07-12 16:17:28.567175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.879 [2024-07-12 16:17:28.567179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.879 [2024-07-12 16:17:28.567183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dd2c0) on tqpair=0x1599a60 00:13:44.879 N/A 00:13:44.879 Multiple Update Detection Support: N/A 00:13:44.879 Firmware Update Granularity: No Information Provided 00:13:44.879 Per-Namespace SMART Log: No 00:13:44.879 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.879 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:44.879 Command Effects Log Page: Supported 00:13:44.879 Get Log Page Extended Data: Supported 00:13:44.879 Telemetry Log Pages: Not Supported 00:13:44.879 Persistent Event Log Pages: Not Supported 00:13:44.879 Supported Log Pages Log Page: May Support 
00:13:44.879 Commands Supported & Effects Log Page: Not Supported 00:13:44.879 Feature Identifiers & Effects Log Page:May Support 00:13:44.879 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.879 Data Area 4 for Telemetry Log: Not Supported 00:13:44.879 Error Log Page Entries Supported: 128 00:13:44.879 Keep Alive: Supported 00:13:44.879 Keep Alive Granularity: 10000 ms 00:13:44.879 00:13:44.879 NVM Command Set Attributes 00:13:44.879 ========================== 00:13:44.879 Submission Queue Entry Size 00:13:44.879 Max: 64 00:13:44.879 Min: 64 00:13:44.879 Completion Queue Entry Size 00:13:44.879 Max: 16 00:13:44.879 Min: 16 00:13:44.879 Number of Namespaces: 32 00:13:44.879 Compare Command: Supported 00:13:44.879 Write Uncorrectable Command: Not Supported 00:13:44.879 Dataset Management Command: Supported 00:13:44.879 Write Zeroes Command: Supported 00:13:44.879 Set Features Save Field: Not Supported 00:13:44.879 Reservations: Supported 00:13:44.879 Timestamp: Not Supported 00:13:44.879 Copy: Supported 00:13:44.879 Volatile Write Cache: Present 00:13:44.879 Atomic Write Unit (Normal): 1 00:13:44.879 Atomic Write Unit (PFail): 1 00:13:44.879 Atomic Compare & Write Unit: 1 00:13:44.879 Fused Compare & Write: Supported 00:13:44.879 Scatter-Gather List 00:13:44.879 SGL Command Set: Supported 00:13:44.879 SGL Keyed: Supported 00:13:44.879 SGL Bit Bucket Descriptor: Not Supported 00:13:44.879 SGL Metadata Pointer: Not Supported 00:13:44.879 Oversized SGL: Not Supported 00:13:44.879 SGL Metadata Address: Not Supported 00:13:44.879 SGL Offset: Supported 00:13:44.879 Transport SGL Data Block: Not Supported 00:13:44.879 Replay Protected Memory Block: Not Supported 00:13:44.879 00:13:44.879 Firmware Slot Information 00:13:44.879 ========================= 00:13:44.879 Active slot: 1 00:13:44.879 Slot 1 Firmware Revision: 24.09 00:13:44.879 00:13:44.879 00:13:44.879 Commands Supported and Effects 00:13:44.879 ============================== 00:13:44.879 Admin Commands 00:13:44.879 -------------- 00:13:44.879 Get Log Page (02h): Supported 00:13:44.879 Identify (06h): Supported 00:13:44.879 Abort (08h): Supported 00:13:44.879 Set Features (09h): Supported 00:13:44.879 Get Features (0Ah): Supported 00:13:44.879 Asynchronous Event Request (0Ch): Supported 00:13:44.879 Keep Alive (18h): Supported 00:13:44.879 I/O Commands 00:13:44.879 ------------ 00:13:44.879 Flush (00h): Supported LBA-Change 00:13:44.879 Write (01h): Supported LBA-Change 00:13:44.879 Read (02h): Supported 00:13:44.879 Compare (05h): Supported 00:13:44.879 Write Zeroes (08h): Supported LBA-Change 00:13:44.879 Dataset Management (09h): Supported LBA-Change 00:13:44.879 Copy (19h): Supported LBA-Change 00:13:44.879 00:13:44.879 Error Log 00:13:44.879 ========= 00:13:44.879 00:13:44.879 Arbitration 00:13:44.879 =========== 00:13:44.879 Arbitration Burst: 1 00:13:44.879 00:13:44.879 Power Management 00:13:44.879 ================ 00:13:44.879 Number of Power States: 1 00:13:44.879 Current Power State: Power State #0 00:13:44.879 Power State #0: 00:13:44.879 Max Power: 0.00 W 00:13:44.879 Non-Operational State: Operational 00:13:44.879 Entry Latency: Not Reported 00:13:44.879 Exit Latency: Not Reported 00:13:44.879 Relative Read Throughput: 0 00:13:44.879 Relative Read Latency: 0 00:13:44.879 Relative Write Throughput: 0 00:13:44.879 Relative Write Latency: 0 00:13:44.879 Idle Power: Not Reported 00:13:44.879 Active Power: Not Reported 00:13:44.879 Non-Operational Permissive Mode: Not Supported 00:13:44.879 00:13:44.879 Health 
Information 00:13:44.879 ================== 00:13:44.879 Critical Warnings: 00:13:44.879 Available Spare Space: OK 00:13:44.879 Temperature: OK 00:13:44.879 Device Reliability: OK 00:13:44.879 Read Only: No 00:13:44.879 Volatile Memory Backup: OK 00:13:44.879 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:44.879 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:44.879 Available Spare: 0% 00:13:44.879 Available Spare Threshold: 0% 00:13:44.879 Life Percentage Used:[2024-07-12 16:17:28.567294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.879 [2024-07-12 16:17:28.567301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1599a60) 00:13:44.879 [2024-07-12 16:17:28.567311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.879 [2024-07-12 16:17:28.567339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dd2c0, cid 7, qid 0 00:13:44.879 [2024-07-12 16:17:28.567675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.879 [2024-07-12 16:17:28.567692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.879 [2024-07-12 16:17:28.567697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.879 [2024-07-12 16:17:28.567701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dd2c0) on tqpair=0x1599a60 00:13:44.879 [2024-07-12 16:17:28.567741] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:44.879 [2024-07-12 16:17:28.567753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc840) on tqpair=0x1599a60 00:13:44.879 [2024-07-12 16:17:28.567760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.879 [2024-07-12 16:17:28.567766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dc9c0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.567771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.880 [2024-07-12 16:17:28.567777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dcb40) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.567782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.880 [2024-07-12 16:17:28.567787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.567792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.880 [2024-07-12 16:17:28.567802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.567806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.567811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.567819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.567842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 
16:17:28.568274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 16:17:28.568291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.568296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.568301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.568309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.568314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.568318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.568326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.568351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 16:17:28.568674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 16:17:28.568689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.568693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.568698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.568703] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:44.880 [2024-07-12 16:17:28.568709] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:44.880 [2024-07-12 16:17:28.568720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.568725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.568729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.568737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.568756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 16:17:28.569039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 16:17:28.569054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.569058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.569075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.569091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.569112] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 16:17:28.569406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 16:17:28.569420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.569425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.569440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.569457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.569476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 16:17:28.569763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 16:17:28.569777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.569782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.569797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.569806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.569814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.569832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 16:17:28.570121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 16:17:28.570136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.570140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.570156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.570173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.570193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 16:17:28.570412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 
16:17:28.570423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.570427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.570443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.570460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.570478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 16:17:28.570807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 16:17:28.570821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.570825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.570841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.570850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1599a60) 00:13:44.880 [2024-07-12 16:17:28.570857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.880 [2024-07-12 16:17:28.573964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15dccc0, cid 3, qid 0 00:13:44.880 [2024-07-12 16:17:28.574028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.880 [2024-07-12 16:17:28.574036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.880 [2024-07-12 16:17:28.574040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.880 [2024-07-12 16:17:28.574044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15dccc0) on tqpair=0x1599a60 00:13:44.880 [2024-07-12 16:17:28.574054] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:13:44.880 0% 00:13:44.880 Data Units Read: 0 00:13:44.880 Data Units Written: 0 00:13:44.880 Host Read Commands: 0 00:13:44.880 Host Write Commands: 0 00:13:44.880 Controller Busy Time: 0 minutes 00:13:44.880 Power Cycles: 0 00:13:44.880 Power On Hours: 0 hours 00:13:44.880 Unsafe Shutdowns: 0 00:13:44.880 Unrecoverable Media Errors: 0 00:13:44.880 Lifetime Error Log Entries: 0 00:13:44.880 Warning Temperature Time: 0 minutes 00:13:44.880 Critical Temperature Time: 0 minutes 00:13:44.880 00:13:44.880 Number of Queues 00:13:44.880 ================ 00:13:44.880 Number of I/O Submission Queues: 127 00:13:44.881 Number of I/O Completion Queues: 127 00:13:44.881 00:13:44.881 Active Namespaces 00:13:44.881 ================= 00:13:44.881 Namespace ID:1 00:13:44.881 Error Recovery Timeout: Unlimited 00:13:44.881 Command 
Set Identifier: NVM (00h) 00:13:44.881 Deallocate: Supported 00:13:44.881 Deallocated/Unwritten Error: Not Supported 00:13:44.881 Deallocated Read Value: Unknown 00:13:44.881 Deallocate in Write Zeroes: Not Supported 00:13:44.881 Deallocated Guard Field: 0xFFFF 00:13:44.881 Flush: Supported 00:13:44.881 Reservation: Supported 00:13:44.881 Namespace Sharing Capabilities: Multiple Controllers 00:13:44.881 Size (in LBAs): 131072 (0GiB) 00:13:44.881 Capacity (in LBAs): 131072 (0GiB) 00:13:44.881 Utilization (in LBAs): 131072 (0GiB) 00:13:44.881 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:44.881 EUI64: ABCDEF0123456789 00:13:44.881 UUID: 8025eefa-05af-4400-9c10-bd2eceb43c7b 00:13:44.881 Thin Provisioning: Not Supported 00:13:44.881 Per-NS Atomic Units: Yes 00:13:44.881 Atomic Boundary Size (Normal): 0 00:13:44.881 Atomic Boundary Size (PFail): 0 00:13:44.881 Atomic Boundary Offset: 0 00:13:44.881 Maximum Single Source Range Length: 65535 00:13:44.881 Maximum Copy Length: 65535 00:13:44.881 Maximum Source Range Count: 1 00:13:44.881 NGUID/EUI64 Never Reused: No 00:13:44.881 Namespace Write Protected: No 00:13:44.881 Number of LBA Formats: 1 00:13:44.881 Current LBA Format: LBA Format #00 00:13:44.881 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:44.881 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.139 rmmod nvme_tcp 00:13:45.139 rmmod nvme_fabrics 00:13:45.139 rmmod nvme_keyring 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74229 ']' 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74229 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74229 ']' 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74229 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74229 00:13:45.139 killing process with pid 74229 
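The controller and namespace report printed above comes from SPDK's identify example, which test/nvmf/host/identify.sh wraps. A minimal, hedged way to re-run it by hand against the same listener — the binary path assumes a default SPDK build tree, while the address, port, and subsystem NQN are the ones shown in this log:

  # Reproduce the report with the SPDK identify example (binary path is an
  # assumption for a default build tree; transport ID values come from the log above).
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/identify \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  # nvme-cli can list what the target exports at the same address:
  nvme discover -t tcp -a 10.0.0.2 -s 4420

The identify example performs the same initialization the debug trace above walks through: Fabrics property reads and writes for the CC.EN/CSTS.RDY handshake, then Identify Controller and Identify Namespace over the admin queue.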
00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74229' 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74229 00:13:45.139 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74229 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:45.398 00:13:45.398 real 0m1.765s 00:13:45.398 user 0m3.948s 00:13:45.398 sys 0m0.574s 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:45.398 ************************************ 00:13:45.398 END TEST nvmf_identify 00:13:45.398 ************************************ 00:13:45.398 16:17:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:45.398 16:17:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:45.398 16:17:28 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:45.398 16:17:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:45.398 16:17:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.398 16:17:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.398 ************************************ 00:13:45.398 START TEST nvmf_perf 00:13:45.398 ************************************ 00:13:45.398 16:17:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:45.398 * Looking for test storage... 
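Before any I/O runs, the nvmf_perf test traced below rebuilds the virtual topology with nvmf_veth_init: one veth pair for the initiator on the host side, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target, all joined by the nvmf_br bridge, with 10.0.0.1/24 on the host and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace. A condensed sketch of that setup, distilled from the ip commands in the trace below (the second target pair, nvmf_tgt_if2 / 10.0.0.3, follows the same pattern and is omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT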
00:13:45.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:45.398 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:45.655 Cannot find device "nvmf_tgt_br" 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.655 Cannot find device "nvmf_tgt_br2" 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:45.655 Cannot find device "nvmf_tgt_br" 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:45.655 Cannot find device "nvmf_tgt_br2" 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:13:45.655 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.656 
16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:45.656 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:45.913 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:45.913 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:45.913 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:45.913 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:45.913 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:45.913 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:45.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:13:45.913 00:13:45.913 --- 10.0.0.2 ping statistics --- 00:13:45.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.913 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:13:45.913 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:45.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:45.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:13:45.913 00:13:45.914 --- 10.0.0.3 ping statistics --- 00:13:45.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.914 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:45.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:45.914 00:13:45.914 --- 10.0.0.1 ping statistics --- 00:13:45.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.914 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74424 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74424 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 74424 ']' 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.914 16:17:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:45.914 [2024-07-12 16:17:29.531036] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:13:45.914 [2024-07-12 16:17:29.531144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.192 [2024-07-12 16:17:29.672392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.192 [2024-07-12 16:17:29.733116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.192 [2024-07-12 16:17:29.733159] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
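The DPDK EAL banner above comes from nvmf_tgt being launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), after which waitforlisten 74424 blocks until the application answers on /var/tmp/spdk.sock. A minimal way to reproduce that start-and-wait outside the harness; the polling loop below is illustrative and is not the harness's own waitforlisten implementation:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the target accepts RPCs
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done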
00:13:46.192 [2024-07-12 16:17:29.733185] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.192 [2024-07-12 16:17:29.733209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.192 [2024-07-12 16:17:29.733231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.192 [2024-07-12 16:17:29.733355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.192 [2024-07-12 16:17:29.733978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.192 [2024-07-12 16:17:29.734139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.192 [2024-07-12 16:17:29.734242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.192 [2024-07-12 16:17:29.763359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:47.123 16:17:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.123 16:17:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:13:47.123 16:17:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.123 16:17:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.123 16:17:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:47.123 16:17:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.123 16:17:30 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:47.123 16:17:30 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:47.380 16:17:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:47.380 16:17:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:47.638 16:17:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:13:47.638 16:17:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.906 16:17:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:47.906 16:17:31 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:13:47.906 16:17:31 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:47.906 16:17:31 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:47.906 16:17:31 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:48.162 [2024-07-12 16:17:31.718523] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.162 16:17:31 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:48.420 16:17:32 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:48.420 16:17:32 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:48.677 16:17:32 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:48.677 16:17:32 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:13:48.934 16:17:32 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.192 [2024-07-12 16:17:32.811917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.192 16:17:32 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.449 16:17:33 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:13:49.449 16:17:33 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:49.449 16:17:33 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:49.449 16:17:33 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:50.824 Initializing NVMe Controllers 00:13:50.824 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:50.824 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:50.824 Initialization complete. Launching workers. 00:13:50.824 ======================================================== 00:13:50.824 Latency(us) 00:13:50.824 Device Information : IOPS MiB/s Average min max 00:13:50.824 PCIE (0000:00:10.0) NSID 1 from core 0: 23384.63 91.35 1368.42 345.44 8104.45 00:13:50.824 ======================================================== 00:13:50.824 Total : 23384.63 91.35 1368.42 345.44 8104.45 00:13:50.824 00:13:50.824 16:17:34 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:51.772 Initializing NVMe Controllers 00:13:51.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:51.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:51.772 Initialization complete. Launching workers. 00:13:51.772 ======================================================== 00:13:51.772 Latency(us) 00:13:51.772 Device Information : IOPS MiB/s Average min max 00:13:51.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3463.99 13.53 288.30 99.52 7230.08 00:13:51.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8170.35 5901.03 12061.49 00:13:51.772 ======================================================== 00:13:51.772 Total : 3586.99 14.01 558.58 99.52 12061.49 00:13:51.772 00:13:52.030 16:17:35 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:53.406 Initializing NVMe Controllers 00:13:53.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:53.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:53.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:53.406 Initialization complete. Launching workers. 
00:13:53.406 ======================================================== 00:13:53.406 Latency(us) 00:13:53.406 Device Information : IOPS MiB/s Average min max 00:13:53.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8478.98 33.12 3779.18 666.22 9523.64 00:13:53.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4006.99 15.65 8023.90 5438.50 16407.28 00:13:53.406 ======================================================== 00:13:53.406 Total : 12485.98 48.77 5141.39 666.22 16407.28 00:13:53.406 00:13:53.406 16:17:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:53.406 16:17:36 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:55.937 Initializing NVMe Controllers 00:13:55.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.937 Controller IO queue size 128, less than required. 00:13:55.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.937 Controller IO queue size 128, less than required. 00:13:55.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:55.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:55.937 Initialization complete. Launching workers. 00:13:55.937 ======================================================== 00:13:55.937 Latency(us) 00:13:55.937 Device Information : IOPS MiB/s Average min max 00:13:55.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1718.02 429.51 75460.69 47346.59 110282.08 00:13:55.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 672.73 168.18 201193.21 68873.29 316996.27 00:13:55.937 ======================================================== 00:13:55.937 Total : 2390.75 597.69 110840.20 47346.59 316996.27 00:13:55.937 00:13:55.937 16:17:39 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:56.195 Initializing NVMe Controllers 00:13:56.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.195 Controller IO queue size 128, less than required. 00:13:56.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.195 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:56.195 Controller IO queue size 128, less than required. 00:13:56.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.195 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:13:56.195 WARNING: Some requested NVMe devices were skipped 00:13:56.195 No valid NVMe controllers or AIO or URING devices found 00:13:56.195 16:17:39 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:13:58.729 Initializing NVMe Controllers 00:13:58.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.729 Controller IO queue size 128, less than required. 00:13:58.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.729 Controller IO queue size 128, less than required. 00:13:58.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:58.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:58.729 Initialization complete. Launching workers. 00:13:58.729 00:13:58.729 ==================== 00:13:58.729 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:58.729 TCP transport: 00:13:58.729 polls: 9742 00:13:58.729 idle_polls: 5632 00:13:58.729 sock_completions: 4110 00:13:58.729 nvme_completions: 6713 00:13:58.729 submitted_requests: 10072 00:13:58.729 queued_requests: 1 00:13:58.729 00:13:58.729 ==================== 00:13:58.729 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:58.729 TCP transport: 00:13:58.729 polls: 9805 00:13:58.729 idle_polls: 4958 00:13:58.729 sock_completions: 4847 00:13:58.729 nvme_completions: 6747 00:13:58.729 submitted_requests: 10086 00:13:58.729 queued_requests: 1 00:13:58.729 ======================================================== 00:13:58.729 Latency(us) 00:13:58.729 Device Information : IOPS MiB/s Average min max 00:13:58.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1677.92 419.48 77851.73 42372.44 124784.43 00:13:58.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1686.42 421.60 76136.72 29979.17 113177.78 00:13:58.729 ======================================================== 00:13:58.729 Total : 3364.34 841.09 76992.06 29979.17 124784.43 00:13:58.729 00:13:58.729 16:17:42 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:13:58.729 16:17:42 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.988 rmmod nvme_tcp 00:13:58.988 rmmod nvme_fabrics 00:13:58.988 rmmod nvme_keyring 00:13:58.988 16:17:42 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74424 ']' 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74424 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 74424 ']' 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 74424 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74424 00:13:58.988 killing process with pid 74424 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74424' 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 74424 00:13:58.988 16:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 74424 00:13:59.554 16:17:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.554 16:17:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.554 16:17:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.554 16:17:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.554 16:17:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.554 16:17:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.554 16:17:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.554 16:17:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.813 16:17:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:59.813 00:13:59.813 real 0m14.291s 00:13:59.813 user 0m52.546s 00:13:59.813 sys 0m3.915s 00:13:59.813 16:17:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.813 16:17:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:59.813 ************************************ 00:13:59.813 END TEST nvmf_perf 00:13:59.813 ************************************ 00:13:59.813 16:17:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:59.813 16:17:43 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:59.813 16:17:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:59.813 16:17:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.813 16:17:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.813 ************************************ 00:13:59.813 START TEST nvmf_fio_host 00:13:59.813 ************************************ 00:13:59.813 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:59.813 * Looking for test storage... 
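The nvmf_perf test that just finished swept spdk_nvme_perf over the TCP listener at several queue depths and I/O sizes (q=1 and q=32 at 4 KiB, q=128 at 256 KiB with -O 16384, a deliberately misaligned 36964-byte run that skips both namespaces, and a final --transport-stat run), plus one local PCIe baseline. Any single data point can be repeated by hand while a target configured as above is still listening; this mirrors the q=32 invocation from the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'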
00:13:59.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:59.813 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.813 16:17:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.813 16:17:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.813 16:17:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.813 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
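nvmf_veth_init in the next lines begins by tearing down whatever topology the previous test left behind, so the "Cannot find device ..." and "Cannot open network namespace ..." messages that follow are expected output of that idempotent cleanup, not failures; the trailing "true" evaluations in the trace show the harness tolerating them. An equivalent guarded cleanup for standalone use, with the explicit error suppression added here rather than copied from how common.sh spells it:

    ip link set nvmf_init_br nomaster        2>/dev/null || true
    ip link set nvmf_tgt_br  nomaster        2>/dev/null || true
    ip link set nvmf_tgt_br2 nomaster        2>/dev/null || true
    ip link delete nvmf_br type bridge       2>/dev/null || true
    ip link delete nvmf_init_if              2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true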
00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:59.814 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:59.815 Cannot find device "nvmf_tgt_br" 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:59.815 Cannot find device "nvmf_tgt_br2" 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:59.815 Cannot find device "nvmf_tgt_br" 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:59.815 Cannot find device "nvmf_tgt_br2" 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:13:59.815 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:00.073 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:00.073 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.073 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:00.073 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.073 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:00.073 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:00.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:14:00.074 00:14:00.074 --- 10.0.0.2 ping statistics --- 00:14:00.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.074 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:00.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:00.074 00:14:00.074 --- 10.0.0.3 ping statistics --- 00:14:00.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.074 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:00.074 00:14:00.074 --- 10.0.0.1 ping statistics --- 00:14:00.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.074 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74833 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74833 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 74833 ']' 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.074 16:17:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:00.333 [2024-07-12 16:17:43.844799] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:14:00.333 [2024-07-12 16:17:43.844914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.333 [2024-07-12 16:17:43.986777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.333 [2024-07-12 16:17:44.045756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
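Once this second target is up, the trace below configures it over RPC (nvmf_create_transport -t tcp -o -u 8192, a 64 MiB Malloc1 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420) and then drives it with fio through the SPDK NVMe ioengine: the fio_nvme/fio_plugin helpers LD_PRELOAD the plugin and pass the target as a filename string. A condensed sketch of that invocation, assuming fio lives at /usr/src/fio and the plugin was built at build/fio/spdk_nvme as shown in the trace:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096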
00:14:00.333 [2024-07-12 16:17:44.045827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.333 [2024-07-12 16:17:44.045856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.333 [2024-07-12 16:17:44.045865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.333 [2024-07-12 16:17:44.045873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.333 [2024-07-12 16:17:44.046041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.333 [2024-07-12 16:17:44.047034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.333 [2024-07-12 16:17:44.047123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.333 [2024-07-12 16:17:44.047127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.592 [2024-07-12 16:17:44.078351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:00.592 16:17:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.592 16:17:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:14:00.592 16:17:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:00.850 [2024-07-12 16:17:44.382731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.850 16:17:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:00.850 16:17:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:00.850 16:17:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:00.850 16:17:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:01.109 Malloc1 00:14:01.109 16:17:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:01.367 16:17:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:01.625 16:17:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.884 [2024-07-12 16:17:45.487311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.884 16:17:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:02.154 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:02.155 16:17:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:02.428 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:02.428 fio-3.35 00:14:02.428 Starting 1 thread 00:14:04.960 00:14:04.960 test: (groupid=0, jobs=1): err= 0: pid=74903: Fri Jul 12 16:17:48 2024 00:14:04.960 read: IOPS=8736, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec) 00:14:04.960 slat (nsec): min=1970, max=419910, avg=2824.64, stdev=4500.41 00:14:04.960 clat (usec): min=3995, max=13331, avg=7894.68, stdev=1098.13 00:14:04.960 lat (usec): min=3998, max=13333, avg=7897.50, stdev=1098.71 00:14:04.960 clat percentiles (usec): 00:14:04.960 | 1.00th=[ 5932], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:14:04.960 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7832], 00:14:04.960 | 70.00th=[ 8029], 80.00th=[ 8586], 90.00th=[ 9372], 95.00th=[ 9896], 00:14:04.960 | 99.00th=[11863], 99.50th=[12125], 99.90th=[13173], 99.95th=[13304], 00:14:04.960 | 99.99th=[13304] 00:14:04.960 bw ( KiB/s): min=32344, max=35936, per=99.98%, avg=34938.00, stdev=1734.31, samples=4 00:14:04.960 iops : min= 8086, max= 8984, avg=8734.50, stdev=433.58, samples=4 00:14:04.960 write: IOPS=8732, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec); 0 zone resets 00:14:04.960 
slat (usec): min=2, max=443, avg= 2.98, stdev= 4.07 00:14:04.960 clat (usec): min=3308, max=12805, avg=6707.80, stdev=1232.15 00:14:04.960 lat (usec): min=3310, max=12807, avg=6710.78, stdev=1233.08 00:14:04.960 clat percentiles (usec): 00:14:04.960 | 1.00th=[ 4490], 5.00th=[ 4883], 10.00th=[ 5145], 20.00th=[ 5997], 00:14:04.960 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:14:04.960 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7504], 95.00th=[ 8029], 00:14:04.960 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12125], 99.95th=[12256], 00:14:04.960 | 99.99th=[12780] 00:14:04.960 bw ( KiB/s): min=33104, max=35840, per=100.00%, avg=34930.00, stdev=1235.88, samples=4 00:14:04.960 iops : min= 8276, max= 8960, avg=8732.50, stdev=308.97, samples=4 00:14:04.960 lat (msec) : 4=0.19%, 10=95.82%, 20=4.00% 00:14:04.960 cpu : usr=67.20%, sys=24.03%, ctx=34, majf=0, minf=7 00:14:04.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:04.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:04.960 issued rwts: total=17534,17527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:04.960 00:14:04.961 Run status group 0 (all jobs): 00:14:04.961 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:14:04.961 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:04.961 
16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:04.961 16:17:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:04.961 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:04.961 fio-3.35 00:14:04.961 Starting 1 thread 00:14:07.495 00:14:07.495 test: (groupid=0, jobs=1): err= 0: pid=74953: Fri Jul 12 16:17:50 2024 00:14:07.495 read: IOPS=8022, BW=125MiB/s (131MB/s)(251MiB/2003msec) 00:14:07.495 slat (usec): min=2, max=154, avg= 4.12, stdev= 2.70 00:14:07.495 clat (usec): min=2414, max=17228, avg=8694.89, stdev=2646.37 00:14:07.495 lat (usec): min=2418, max=17231, avg=8699.02, stdev=2646.49 00:14:07.495 clat percentiles (usec): 00:14:07.495 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6259], 00:14:07.495 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 9110], 00:14:07.495 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[12256], 95.00th=[13566], 00:14:07.495 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16581], 99.95th=[16909], 00:14:07.495 | 99.99th=[17171] 00:14:07.495 bw ( KiB/s): min=59296, max=75264, per=51.38%, avg=65960.00, stdev=7205.68, samples=4 00:14:07.495 iops : min= 3706, max= 4704, avg=4122.50, stdev=450.36, samples=4 00:14:07.495 write: IOPS=4745, BW=74.1MiB/s (77.7MB/s)(135MiB/1821msec); 0 zone resets 00:14:07.495 slat (usec): min=32, max=358, avg=40.63, stdev=10.05 00:14:07.495 clat (usec): min=3060, max=19739, avg=12620.24, stdev=2148.55 00:14:07.495 lat (usec): min=3100, max=19775, avg=12660.87, stdev=2150.37 00:14:07.495 clat percentiles (usec): 00:14:07.495 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:14:07.495 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12387], 60.00th=[12911], 00:14:07.495 | 70.00th=[13566], 80.00th=[14484], 90.00th=[15533], 95.00th=[16581], 00:14:07.495 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:14:07.495 | 99.99th=[19792] 00:14:07.495 bw ( KiB/s): min=62048, max=79104, per=90.80%, avg=68936.00, stdev=7656.83, samples=4 00:14:07.495 iops : min= 3878, max= 4944, avg=4308.50, stdev=478.55, samples=4 00:14:07.495 lat (msec) : 4=0.36%, 10=49.00%, 20=50.63% 00:14:07.495 cpu : usr=79.38%, sys=15.28%, ctx=25, majf=0, minf=14 00:14:07.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:07.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:07.495 issued rwts: total=16070,8641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:07.495 00:14:07.495 Run status group 0 (all jobs): 00:14:07.495 READ: bw=125MiB/s (131MB/s), 
125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2003-2003msec 00:14:07.495 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=135MiB (142MB), run=1821-1821msec 00:14:07.495 16:17:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.495 16:17:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:07.495 16:17:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:07.495 16:17:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:07.495 16:17:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:07.495 16:17:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.495 16:17:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:07.495 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.495 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:07.495 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.495 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.495 rmmod nvme_tcp 00:14:07.495 rmmod nvme_fabrics 00:14:07.495 rmmod nvme_keyring 00:14:07.495 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74833 ']' 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74833 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 74833 ']' 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 74833 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74833 00:14:07.496 killing process with pid 74833 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74833' 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 74833 00:14:07.496 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 74833 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
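For reference, the two fio runs earlier in this test are driven through fio's external ioengine mechanism: fio is launched with the SPDK NVMe plugin preloaded and a --filename string that encodes the TCP transport address of the subsystem created above. A minimal sketch of an equivalent manual invocation, assuming the plugin and job-file paths seen in this log and that ioengine=spdk is set inside example_config.fio:

    # Hedged sketch of the fio-plugin invocation used by host/fio.sh (paths as in this log).
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    JOB=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio "$JOB" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

With the data path verified, the test proceeds (as the log shows here) to delete the subsystem and tear the target down before the next test starts.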
00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:07.755 00:14:07.755 real 0m7.975s 00:14:07.755 user 0m32.991s 00:14:07.755 sys 0m2.169s 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.755 16:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:07.755 ************************************ 00:14:07.755 END TEST nvmf_fio_host 00:14:07.755 ************************************ 00:14:07.755 16:17:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.755 16:17:51 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:07.755 16:17:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.755 16:17:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.755 16:17:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.755 ************************************ 00:14:07.755 START TEST nvmf_failover 00:14:07.755 ************************************ 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:07.755 * Looking for test storage... 00:14:07.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.755 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.756 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:08.015 Cannot find device "nvmf_tgt_br" 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:14:08.015 Cannot find device "nvmf_tgt_br2" 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:08.015 Cannot find device "nvmf_tgt_br" 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:08.015 Cannot find device "nvmf_tgt_br2" 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:08.015 16:17:51 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:08.015 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:08.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:08.275 00:14:08.275 --- 10.0.0.2 ping statistics --- 00:14:08.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.275 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:08.275 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.275 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:08.275 00:14:08.275 --- 10.0.0.3 ping statistics --- 00:14:08.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.275 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:08.275 00:14:08.275 --- 10.0.0.1 ping statistics --- 00:14:08.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.275 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:08.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75162 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75162 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75162 ']' 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.275 16:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:08.275 [2024-07-12 16:17:51.895308] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:14:08.275 [2024-07-12 16:17:51.895390] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.534 [2024-07-12 16:17:52.038582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.534 [2024-07-12 16:17:52.107361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.534 [2024-07-12 16:17:52.107798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.534 [2024-07-12 16:17:52.107919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.534 [2024-07-12 16:17:52.108157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.534 [2024-07-12 16:17:52.108303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:08.534 [2024-07-12 16:17:52.108650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.534 [2024-07-12 16:17:52.108664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.534 [2024-07-12 16:17:52.108711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.534 [2024-07-12 16:17:52.142297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.470 16:17:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.470 16:17:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:09.470 16:17:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.470 16:17:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.470 16:17:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:09.470 16:17:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.470 16:17:52 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:09.728 [2024-07-12 16:17:53.208195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.728 16:17:53 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:09.987 Malloc0 00:14:09.987 16:17:53 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.246 16:17:53 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:10.504 16:17:54 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.763 [2024-07-12 16:17:54.254711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.763 16:17:54 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:11.021 [2024-07-12 16:17:54.530926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:11.021 16:17:54 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:11.287 [2024-07-12 16:17:54.767230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75221 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75221 /var/tmp/bdevperf.sock 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover 
-- common/autotest_common.sh@829 -- # '[' -z 75221 ']' 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.287 16:17:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:11.547 16:17:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.547 16:17:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:11.547 16:17:55 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:11.817 NVMe0n1 00:14:11.817 16:17:55 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:12.075 00:14:12.075 16:17:55 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75237 00:14:12.075 16:17:55 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:12.075 16:17:55 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:13.011 16:17:56 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.271 [2024-07-12 16:17:56.971880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6180 is same with the state(5) to be set 00:14:13.271 [2024-07-12 16:17:56.971971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6180 is same with the state(5) to be set 00:14:13.271 [2024-07-12 16:17:56.971986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6180 is same with the state(5) to be set 00:14:13.271 16:17:56 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:16.557 16:17:59 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:16.816 00:14:16.816 16:18:00 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:17.074 [2024-07-12 16:18:00.648453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6dc0 is same with the state(5) to be set 00:14:17.074 [2024-07-12 16:18:00.648507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6dc0 is same with the state(5) to be set 00:14:17.074 [2024-07-12 16:18:00.648519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6dc0 is same with the state(5) to be set 00:14:17.074 
16:18:00 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:20.354 16:18:03 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.354 [2024-07-12 16:18:03.920466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.354 16:18:03 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:21.329 16:18:04 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:21.587 [2024-07-12 16:18:05.249996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224ccb0 is same with the state(5) to be set 00:14:21.587 [2024-07-12 16:18:05.250055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224ccb0 is same with the state(5) to be set 00:14:21.587 [2024-07-12 16:18:05.250067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224ccb0 is same with the state(5) to be set 00:14:21.587 [2024-07-12 16:18:05.250076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224ccb0 is same with the state(5) to be set 00:14:21.587 [2024-07-12 16:18:05.250084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224ccb0 is same with the state(5) to be set 00:14:21.587 16:18:05 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75237 00:14:28.154 0 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75221 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75221 ']' 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75221 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75221 00:14:28.154 killing process with pid 75221 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75221' 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75221 00:14:28.154 16:18:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75221 00:14:28.154 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:28.154 [2024-07-12 16:17:54.828762] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:14:28.154 [2024-07-12 16:17:54.828917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75221 ] 00:14:28.154 [2024-07-12 16:17:54.971637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.154 [2024-07-12 16:17:55.031209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.154 [2024-07-12 16:17:55.061995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:28.154 Running I/O for 15 seconds... 00:14:28.154 [2024-07-12 16:17:56.972076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.154 [2024-07-12 16:17:56.972157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.154 [2024-07-12 16:17:56.972228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.154 [2024-07-12 16:17:56.972313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.154 [2024-07-12 16:17:56.972357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01370 is same with the state(5) to be set 00:14:28.154 [2024-07-12 16:17:56.972484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.972519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.972585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.972635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.972683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.972730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.972778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.972894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.972952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.972978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.973003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.973030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.973055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.973082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.973105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.973133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.973165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.973194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.973219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.973245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.973268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.154 [2024-07-12 16:17:56.973294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.154 [2024-07-12 16:17:56.973318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.973370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.973423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.973483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.973551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.973606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.973657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.973708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.973758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.973808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.973858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.973930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.973957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.973983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 
[2024-07-12 16:17:56.974393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.974684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.974734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.974800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.974921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.974964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.974989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.975042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.975093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.155 [2024-07-12 16:17:56.975144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.155 [2024-07-12 16:17:56.975640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.155 [2024-07-12 16:17:56.975668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.975691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.975720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.975743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.975781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.975808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.975835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.975860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.975899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.975954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.975987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75248 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.976542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.976604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.976654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:28.156 [2024-07-12 16:17:56.976705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.976759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.976823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.976873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.976944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.976969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.976995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.977045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.977095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.977201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.977253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.977304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.977355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.977406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.156 [2024-07-12 16:17:56.977457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977855] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.977961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.977987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.978014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.978040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.978070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.978095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.156 [2024-07-12 16:17:56.978135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.156 [2024-07-12 16:17:56.978159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:17:56.978844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.978893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.978961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.978992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 
[2024-07-12 16:17:56.979054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:17:56.979670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.157 [2024-07-12 16:17:56.979769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.157 [2024-07-12 16:17:56.979804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75496 len:8 PRP1 0x0 PRP2 0x0 00:14:28.157 [2024-07-12 16:17:56.979827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:17:56.979908] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc4ffa0 was disconnected and freed. reset controller. 00:14:28.157 [2024-07-12 16:17:56.979943] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:28.157 [2024-07-12 16:17:56.979972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:28.157 [2024-07-12 16:17:56.984906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:28.157 [2024-07-12 16:17:56.984962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc01370 (9): Bad file descriptor 00:14:28.157 [2024-07-12 16:17:57.026803] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
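The records above show every queued I/O that was completed with "ABORTED - SQ DELETION" while qpair 0xc4ffa0 was torn down, after which bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421 and the controller reset completes successfully. When digging through a dump like this it can help to condense the per-command NOTICE lines into a short summary. The sketch below is a minimal, hypothetical helper, not part of this log or of SPDK: the script, the default file name "build.log", and the function summarize_aborts() are assumptions, and the parsing relies only on the record format visible in the lines above.

#!/usr/bin/env python3
# Minimal sketch (assumed helper, not an SPDK tool): summarize the aborted
# READ/WRITE commands printed by nvme_io_qpair_print_command and the matching
# "ABORTED - SQ DELETION" completions in a log dump like the one above.
import re
import sys

# Command prints look like:
#   nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74976 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
# Each aborted command is followed by a completion print like:
#   spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) ...
ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")


def summarize_aborts(text: str) -> dict:
    """Count aborted reads/writes and report the LBA span they covered."""
    reads = writes = 0
    lbas = []
    for m in CMD_RE.finditer(text):
        opcode, lba = m.group(1), int(m.group(5))
        if opcode == "READ":
            reads += 1
        else:
            writes += 1
        lbas.append(lba)
    summary = {
        "reads_aborted": reads,
        "writes_aborted": writes,
        "sq_deletion_completions": len(ABORT_RE.findall(text)),
    }
    if lbas:
        summary["lba_range"] = (min(lbas), max(lbas))
    return summary


if __name__ == "__main__":
    # "build.log" is an assumed file name; pass the saved console log as an argument.
    path = sys.argv[1] if len(sys.argv) > 1 else "build.log"
    with open(path, encoding="utf-8", errors="replace") as fh:
        print(summarize_aborts(fh.read()))

Run against the saved console output, such a summary makes it easy to confirm that the abort burst is confined to one qpair teardown (here it ends with the successful controller reset) rather than spread across the whole run; the same pattern repeats below for the second abort burst at 16:18:00.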
00:14:28.157 [2024-07-12 16:18:00.649039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:18:00.649103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:18:00.649207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:18:00.649273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.157 [2024-07-12 16:18:00.649321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:18:00.649370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:18:00.649418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:18:00.649467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:18:00.649531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:18:00.649579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:18:00.649624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649648] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.157 [2024-07-12 16:18:00.649671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.157 [2024-07-12 16:18:00.649695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.649718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.649741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.649764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.649805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.649828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.649894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.649935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.649961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.650173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.650224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650266] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.650290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.650368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.650417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.650467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.650516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.650565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88744 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.650956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.650981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:28.158 [2024-07-12 16:18:00.651328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.158 [2024-07-12 16:18:00.651471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.651519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.158 [2024-07-12 16:18:00.651544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.158 [2024-07-12 16:18:00.651567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.651592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.651614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.651639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.651662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.651687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.651709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.651734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.651757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.651782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.651804] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.651828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.651874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.651922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.651946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.651968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.651991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.652788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.652837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.652900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.652953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.652979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.653002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.653050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.653099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.653149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.159 [2024-07-12 16:18:00.653200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 
[2024-07-12 16:18:00.653485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.159 [2024-07-12 16:18:00.653863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.159 [2024-07-12 16:18:00.653889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.653929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.160 [2024-07-12 16:18:00.653974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.160 [2024-07-12 16:18:00.654028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.160 [2024-07-12 16:18:00.654080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:95 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.654956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.654981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.655032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.655079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88560 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.655127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.655176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.655224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.160 [2024-07-12 16:18:00.655284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4ba30 is same with the state(5) to be set 00:14:28.160 [2024-07-12 16:18:00.655338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.655356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.655374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88592 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.655395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.655437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.655455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89112 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.655477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.655533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.655551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89120 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.655574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.655614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.655631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:89128 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.655654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.655694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.655711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89136 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.655742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.655798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.655815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89144 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.655837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.655876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.655907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89152 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.655931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.655954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.655985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.656004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89160 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.656027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.656050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.656068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.656085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89168 len:8 PRP1 0x0 PRP2 0x0 00:14:28.160 [2024-07-12 16:18:00.656106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.160 [2024-07-12 16:18:00.656130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.160 [2024-07-12 16:18:00.656147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.160 [2024-07-12 16:18:00.656182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89176 len:8 PRP1 0x0 PRP2 0x0 00:14:28.161 
[2024-07-12 16:18:00.656204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:00.656228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.161 [2024-07-12 16:18:00.656246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.161 [2024-07-12 16:18:00.656264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89184 len:8 PRP1 0x0 PRP2 0x0 00:14:28.161 [2024-07-12 16:18:00.656300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:00.656325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.161 [2024-07-12 16:18:00.656343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.161 [2024-07-12 16:18:00.656362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89192 len:8 PRP1 0x0 PRP2 0x0 00:14:28.161 [2024-07-12 16:18:00.656385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:00.656410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.161 [2024-07-12 16:18:00.656437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.161 [2024-07-12 16:18:00.656455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89200 len:8 PRP1 0x0 PRP2 0x0 00:14:28.161 [2024-07-12 16:18:00.656483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:00.656547] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc4ba30 was disconnected and freed. reset controller. 
00:14:28.161 [2024-07-12 16:18:00.656579] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:14:28.161 [2024-07-12 16:18:00.656676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.161 [2024-07-12 16:18:00.656708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:00.656733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.161 [2024-07-12 16:18:00.656757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:00.656780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.161 [2024-07-12 16:18:00.656817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:00.656842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.161 [2024-07-12 16:18:00.656864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:00.656902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:28.161 [2024-07-12 16:18:00.656971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc01370 (9): Bad file descriptor 00:14:28.161 [2024-07-12 16:18:00.661651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:28.161 [2024-07-12 16:18:00.700697] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:28.161 [2024-07-12 16:18:05.250807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.250885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.250932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.250962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.250992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.161 [2024-07-12 16:18:05.251232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.161 [2024-07-12 16:18:05.251282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.161 [2024-07-12 16:18:05.251336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.161 [2024-07-12 16:18:05.251419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251450] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.161 [2024-07-12 16:18:05.251475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.161 [2024-07-12 16:18:05.251528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.161 [2024-07-12 16:18:05.251580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.161 [2024-07-12 16:18:05.251633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.251955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.251980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.161 [2024-07-12 16:18:05.252548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.161 [2024-07-12 16:18:05.252577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.252601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.252628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.252653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.252679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.252705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.252731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.252756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.252783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.252821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.252853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.252892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.252922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.252945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.252969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.252994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.162 [2024-07-12 16:18:05.253050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.162 [2024-07-12 16:18:05.253105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19384 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.162 [2024-07-12 16:18:05.253158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.162 [2024-07-12 16:18:05.253212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.162 [2024-07-12 16:18:05.253264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.162 [2024-07-12 16:18:05.253315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.162 [2024-07-12 16:18:05.253366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.162 [2024-07-12 16:18:05.253419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 
[2024-07-12 16:18:05.253700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.253955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.253981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.162 [2024-07-12 16:18:05.254570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.162 [2024-07-12 16:18:05.254599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.254622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.254652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.254676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.254703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.254727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.254755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.254780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.254807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.254840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.254883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.254926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.254954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.254982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.255626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.255686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.255739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.255791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.255844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.255886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.255914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 
16:18:05.255939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.255964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.256029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.256089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.256144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.256197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.256249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.163 [2024-07-12 16:18:05.256318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.256387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.256442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.256500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.256552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.256604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.256657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.163 [2024-07-12 16:18:05.256709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc84320 is same with the state(5) to be set 00:14:28.163 [2024-07-12 16:18:05.256766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.163 [2024-07-12 16:18:05.256784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.163 [2024-07-12 16:18:05.256805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:8 PRP1 0x0 PRP2 0x0 00:14:28.163 [2024-07-12 16:18:05.256828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.163 [2024-07-12 16:18:05.256884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.163 [2024-07-12 16:18:05.256906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20040 len:8 PRP1 0x0 PRP2 0x0 00:14:28.163 [2024-07-12 16:18:05.256928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.163 [2024-07-12 16:18:05.256954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.163 [2024-07-12 16:18:05.256973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.163 [2024-07-12 16:18:05.256991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20048 len:8 PRP1 0x0 PRP2 0x0 00:14:28.163 [2024-07-12 16:18:05.257013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 
16:18:05.257100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20056 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20072 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20080 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20088 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20104 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20112 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20120 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.257934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.257953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.257970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20136 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.257992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20144 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.258077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:20152 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.258160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.258241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20168 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.258324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20176 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.258428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20184 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.258512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.258596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20200 len:8 PRP1 0x0 PRP2 0x0 
00:14:28.164 [2024-07-12 16:18:05.258678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:28.164 [2024-07-12 16:18:05.258720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:28.164 [2024-07-12 16:18:05.258745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20208 len:8 PRP1 0x0 PRP2 0x0 00:14:28.164 [2024-07-12 16:18:05.258769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.258839] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc84320 was disconnected and freed. reset controller. 00:14:28.164 [2024-07-12 16:18:05.258885] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:28.164 [2024-07-12 16:18:05.258975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.164 [2024-07-12 16:18:05.259010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.259036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.164 [2024-07-12 16:18:05.259061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.259086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.164 [2024-07-12 16:18:05.259110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.259135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.164 [2024-07-12 16:18:05.259159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.164 [2024-07-12 16:18:05.259183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:28.164 [2024-07-12 16:18:05.259276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc01370 (9): Bad file descriptor 00:14:28.164 [2024-07-12 16:18:05.263963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:28.164 [2024-07-12 16:18:05.299707] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:28.164
00:14:28.164 Latency(us)
00:14:28.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:28.164 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:28.164 Verification LBA range: start 0x0 length 0x4000
00:14:28.165 NVMe0n1 : 15.01 8762.65 34.23 219.91 0.00 14215.28 659.08 17158.52
00:14:28.165 ===================================================================================================================
00:14:28.165 Total : 8762.65 34.23 219.91 0.00 14215.28 659.08 17158.52
00:14:28.165 Received shutdown signal, test time was about 15.000000 seconds
00:14:28.165
00:14:28.165 Latency(us)
00:14:28.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:28.165 ===================================================================================================================
00:14:28.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:14:28.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75410
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75410 /var/tmp/bdevperf.sock
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75410 ']'
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
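The trace above counts 'Resetting controller successful' lines in try.txt (expecting three, one per forced failover) and then relaunches bdevperf in RPC-server mode; the waitforlisten trace that follows is the script blocking until /var/tmp/bdevperf.sock appears. A minimal bash sketch of that step, with the grep pattern, try.txt path and bdevperf flags copied from the trace (the variable handling and error message are illustrative, not the literal failover.sh code):

  # Sketch: verify three successful controller resets were logged, then
  # restart bdevperf so the next phase can drive it over its RPC socket.
  try_txt=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$try_txt")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi
  # -z makes bdevperf wait for RPC commands on /var/tmp/bdevperf.sock
  # instead of starting I/O immediately (same flags as in the trace).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!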
00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:28.165 [2024-07-12 16:18:11.605566] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:28.165 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:28.429 [2024-07-12 16:18:11.885882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:28.429 16:18:11 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:28.687 NVMe0n1 00:14:28.687 16:18:12 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:28.945 00:14:28.945 16:18:12 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:29.204 00:14:29.204 16:18:12 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:29.204 16:18:12 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:29.462 16:18:13 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:29.734 16:18:13 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:33.040 16:18:16 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:33.040 16:18:16 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:33.040 16:18:16 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.040 16:18:16 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75479 00:14:33.040 16:18:16 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 75479 00:14:34.414 0 00:14:34.414 16:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:34.414 [2024-07-12 16:18:11.112695] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
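The xtrace above (before the try.txt dump that continues below) is the core of the failover exercise: secondary listeners are added on ports 4421 and 4422, the same subsystem is attached through ports 4420, 4421 and 4422 inside bdevperf, the active path is detached to force a failover, and I/O is then driven via bdevperf.py. Condensed as a sketch; every command appears verbatim in the trace, while the loop and the shell variables are illustrative shorthand rather than the literal failover.sh code:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Expose the subsystem on two additional target-side ports.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

  # Attach the same controller through all three paths inside bdevperf.
  for port in 4420 4421 4422; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n $NQN
  done

  # Drop the active path, give the driver time to fail over, and confirm
  # the controller is still present before running I/O.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  sleep 3
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0

  # Drive the verify workload against whatever path is now active.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests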
00:14:34.414 [2024-07-12 16:18:11.112893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75410 ] 00:14:34.414 [2024-07-12 16:18:11.248730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.414 [2024-07-12 16:18:11.308110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.414 [2024-07-12 16:18:11.337394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:34.414 [2024-07-12 16:18:13.383990] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:34.414 [2024-07-12 16:18:13.384543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.414 [2024-07-12 16:18:13.384658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.414 [2024-07-12 16:18:13.384752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.414 [2024-07-12 16:18:13.384831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.414 [2024-07-12 16:18:13.384935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.414 [2024-07-12 16:18:13.385016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.414 [2024-07-12 16:18:13.385094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.414 [2024-07-12 16:18:13.385174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.414 [2024-07-12 16:18:13.385197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:34.414 [2024-07-12 16:18:13.385255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:34.414 [2024-07-12 16:18:13.385289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d5370 (9): Bad file descriptor 00:14:34.414 [2024-07-12 16:18:13.392803] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:34.414 Running I/O for 1 seconds... 
00:14:34.414 00:14:34.414 Latency(us) 00:14:34.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.414 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:34.414 Verification LBA range: start 0x0 length 0x4000 00:14:34.414 NVMe0n1 : 1.01 7933.02 30.99 0.00 0.00 16037.07 1340.51 15609.48 00:14:34.414 =================================================================================================================== 00:14:34.414 Total : 7933.02 30.99 0.00 0.00 16037.07 1340.51 15609.48 00:14:34.414 16:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:34.414 16:18:17 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:34.414 16:18:18 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:34.981 16:18:18 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:34.981 16:18:18 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:35.239 16:18:18 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:35.496 16:18:19 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:38.775 16:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:38.775 16:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:38.775 16:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 75410 00:14:38.775 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75410 ']' 00:14:38.775 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75410 00:14:38.775 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:14:38.775 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:38.776 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75410 00:14:38.776 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:38.776 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:38.776 killing process with pid 75410 00:14:38.776 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75410' 00:14:38.776 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75410 00:14:38.776 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75410 00:14:39.034 16:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:39.034 16:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:39.292 16:18:22 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.292 rmmod nvme_tcp 00:14:39.292 rmmod nvme_fabrics 00:14:39.292 rmmod nvme_keyring 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75162 ']' 00:14:39.292 16:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75162 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75162 ']' 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75162 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75162 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75162' 00:14:39.293 killing process with pid 75162 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75162 00:14:39.293 16:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75162 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:39.551 00:14:39.551 real 0m31.802s 00:14:39.551 user 2m3.350s 00:14:39.551 sys 0m5.448s 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.551 16:18:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:39.551 ************************************ 00:14:39.551 END TEST nvmf_failover 00:14:39.551 ************************************ 00:14:39.551 16:18:23 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:14:39.551 16:18:23 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:39.551 16:18:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:39.551 16:18:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.551 16:18:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.551 ************************************ 00:14:39.551 START TEST nvmf_host_discovery 00:14:39.551 ************************************ 00:14:39.551 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:39.809 * Looking for test storage... 00:14:39.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:39.809 16:18:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:39.809 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:39.809 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.809 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:39.810 Cannot find device "nvmf_tgt_br" 00:14:39.810 
16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:39.810 Cannot find device "nvmf_tgt_br2" 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:39.810 Cannot find device "nvmf_tgt_br" 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:39.810 Cannot find device "nvmf_tgt_br2" 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:39.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:39.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:39.810 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:40.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:40.068 00:14:40.068 --- 10.0.0.2 ping statistics --- 00:14:40.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.068 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:40.068 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:40.068 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:40.068 00:14:40.068 --- 10.0.0.3 ping statistics --- 00:14:40.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.068 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:40.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:14:40.068 00:14:40.068 --- 10.0.0.1 ping statistics --- 00:14:40.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.068 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75748 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75748 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 75748 ']' 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.068 16:18:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 [2024-07-12 16:18:23.701651] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:14:40.068 [2024-07-12 16:18:23.701759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.326 [2024-07-12 16:18:23.837166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.326 [2024-07-12 16:18:23.911043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:40.326 [2024-07-12 16:18:23.911108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.326 [2024-07-12 16:18:23.911120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.326 [2024-07-12 16:18:23.911129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.326 [2024-07-12 16:18:23.911136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.326 [2024-07-12 16:18:23.911169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.326 [2024-07-12 16:18:23.940341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.275 [2024-07-12 16:18:24.758510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.275 [2024-07-12 16:18:24.766603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.275 null0 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.275 null1 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.275 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75780 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75780 /tmp/host.sock 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 75780 ']' 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.275 16:18:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.275 [2024-07-12 16:18:24.874381] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:14:41.275 [2024-07-12 16:18:24.874805] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75780 ] 00:14:41.532 [2024-07-12 16:18:25.014317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.532 [2024-07-12 16:18:25.081885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.532 [2024-07-12 16:18:25.112732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:42.098 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.098 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:42.098 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.098 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:42.098 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.098 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.356 16:18:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.356 16:18:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:14:42.356 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:42.357 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.614 [2024-07-12 16:18:26.179000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:14:42.614 
16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.614 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.872 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:42.872 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:14:42.872 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:42.872 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:42.872 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:42.872 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.872 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.872 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:14:42.873 16:18:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:14:43.131 [2024-07-12 16:18:26.841899] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:43.131 [2024-07-12 16:18:26.841942] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:43.131 [2024-07-12 16:18:26.841962] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:43.131 [2024-07-12 16:18:26.847953] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:14:43.389 [2024-07-12 16:18:26.905085] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:43.389 [2024-07-12 16:18:26.905141] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:43.951 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
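The waitforcondition loops above keep re-evaluating small helpers until they match the expected value. As the xtrace shows, those helpers are just RPC calls on the host app's /tmp/host.sock piped through jq, sort and xargs; a sketch of their shape (scripts/rpc.py stands in for the test framework's rpc_cmd wrapper, which is an assumption):

  get_subsystem_names() {
      # controller names attached on the host side, e.g. "nvme0"
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      # namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {
      # trsvcid of every path attached to the given controller, e.g. "4420 4421"
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # after discovery attaches the 4420 path (as logged), the checks reduce to:
  [[ "$(get_subsystem_names)" == "nvme0" ]]
  [[ "$(get_bdev_list)" == "nvme0n1" ]]
  [[ "$(get_subsystem_paths nvme0)" == "4420" ]]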
00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.952 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.209 [2024-07-12 16:18:27.780460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:44.209 [2024-07-12 16:18:27.781582] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:44.209 [2024-07-12 16:18:27.781624] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.209 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:44.210 [2024-07-12 16:18:27.787566] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.210 16:18:27 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.210 [2024-07-12 16:18:27.846072] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:44.210 [2024-07-12 16:18:27.846117] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:44.210 [2024-07-12 16:18:27.846125] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.210 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:44.467 16:18:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.467 [2024-07-12 16:18:28.025658] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:44.467 [2024-07-12 16:18:28.025704] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:44.467 [2024-07-12 16:18:28.031664] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:14:44.467 [2024-07-12 16:18:28.031697] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:44.467 [2024-07-12 16:18:28.031814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.467 [2024-07-12 16:18:28.031851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.467 [2024-07-12 16:18:28.031895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.467 [2024-07-12 16:18:28.031914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.467 [2024-07-12 16:18:28.031925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.467 [2024-07-12 16:18:28.031934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.467 [2024-07-12 16:18:28.031945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.467 [2024-07-12 16:18:28.031954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.467 [2024-07-12 16:18:28.031963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefc500 is same with the state(5) to be set 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:44.467 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.468 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:44.726 16:18:28 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.726 16:18:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.098 [2024-07-12 16:18:29.412418] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:46.098 [2024-07-12 16:18:29.412462] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:46.098 [2024-07-12 16:18:29.412484] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:46.098 [2024-07-12 16:18:29.418456] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:14:46.098 [2024-07-12 16:18:29.479045] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:46.098 [2024-07-12 16:18:29.479109] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:46.098 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.098 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:46.098 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.099 request: 00:14:46.099 { 00:14:46.099 "name": "nvme", 00:14:46.099 "trtype": 
"tcp", 00:14:46.099 "traddr": "10.0.0.2", 00:14:46.099 "adrfam": "ipv4", 00:14:46.099 "trsvcid": "8009", 00:14:46.099 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:46.099 "wait_for_attach": true, 00:14:46.099 "method": "bdev_nvme_start_discovery", 00:14:46.099 "req_id": 1 00:14:46.099 } 00:14:46.099 Got JSON-RPC error response 00:14:46.099 response: 00:14:46.099 { 00:14:46.099 "code": -17, 00:14:46.099 "message": "File exists" 00:14:46.099 } 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.099 request: 00:14:46.099 { 00:14:46.099 "name": "nvme_second", 00:14:46.099 "trtype": "tcp", 00:14:46.099 "traddr": "10.0.0.2", 00:14:46.099 "adrfam": "ipv4", 00:14:46.099 "trsvcid": "8009", 00:14:46.099 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:46.099 "wait_for_attach": true, 00:14:46.099 "method": "bdev_nvme_start_discovery", 00:14:46.099 "req_id": 1 00:14:46.099 } 00:14:46.099 Got JSON-RPC error response 00:14:46.099 response: 00:14:46.099 { 00:14:46.099 "code": -17, 00:14:46.099 "message": "File exists" 00:14:46.099 } 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:46.099 16:18:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.099 16:18:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.471 [2024-07-12 16:18:30.771957] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:47.471 [2024-07-12 16:18:30.772267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf16230 with addr=10.0.0.2, port=8010 00:14:47.471 [2024-07-12 16:18:30.772301] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:47.471 [2024-07-12 16:18:30.772326] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:47.471 [2024-07-12 16:18:30.772339] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:48.402 [2024-07-12 16:18:31.771954] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:48.402 [2024-07-12 16:18:31.772029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf170a0 with addr=10.0.0.2, port=8010 00:14:48.402 [2024-07-12 16:18:31.772052] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:48.403 [2024-07-12 16:18:31.772063] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:48.403 [2024-07-12 16:18:31.772073] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:49.334 [2024-07-12 16:18:32.771771] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:14:49.334 request: 00:14:49.334 { 00:14:49.334 "name": "nvme_second", 00:14:49.334 "trtype": "tcp", 00:14:49.334 "traddr": "10.0.0.2", 00:14:49.334 "adrfam": "ipv4", 00:14:49.334 "trsvcid": "8010", 00:14:49.334 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:49.334 "wait_for_attach": false, 00:14:49.334 "attach_timeout_ms": 3000, 00:14:49.334 "method": "bdev_nvme_start_discovery", 00:14:49.334 "req_id": 1 00:14:49.334 } 00:14:49.334 Got JSON-RPC error response 00:14:49.334 response: 00:14:49.334 { 00:14:49.334 "code": -110, 00:14:49.334 "message": "Connection timed out" 00:14:49.334 } 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 1 == 0 ]] 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75780 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.334 rmmod nvme_tcp 00:14:49.334 rmmod nvme_fabrics 00:14:49.334 rmmod nvme_keyring 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75748 ']' 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75748 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 75748 ']' 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 75748 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75748 00:14:49.334 killing process with pid 75748 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:49.334 
16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75748' 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 75748 00:14:49.334 16:18:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 75748 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:49.591 00:14:49.591 real 0m9.936s 00:14:49.591 user 0m19.292s 00:14:49.591 sys 0m1.848s 00:14:49.591 ************************************ 00:14:49.591 END TEST nvmf_host_discovery 00:14:49.591 ************************************ 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:49.591 16:18:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:49.591 16:18:33 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:49.591 16:18:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:49.591 16:18:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.591 16:18:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.591 ************************************ 00:14:49.591 START TEST nvmf_host_multipath_status 00:14:49.591 ************************************ 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:49.591 * Looking for test storage... 
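The nvmf_host_discovery run that just finished boils down to a handful of JSON-RPC calls against the host application's socket. A minimal sketch of that sequence, assuming an SPDK host app is already listening on /tmp/host.sock and a discovery subsystem is exposed on 10.0.0.2:8009 (addresses, ports and NQNs are the ones shown in the trace above):

# Follow the discovery service and wait for the initial attach to finish (-w).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# A second discovery job against the same discovery service is rejected with
# -17 "File exists", whether it reuses -b nvme or a new name such as nvme_second.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w || true

# Pointing at a port with no listener (8010) and bounding the attach with -T 3000 ms
# ends in -110 "Connection timed out" after the connect retries seen in the trace.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || true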
00:14:49.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:49.591 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:49.848 Cannot find device "nvmf_tgt_br" 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:14:49.848 Cannot find device "nvmf_tgt_br2" 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:49.848 Cannot find device "nvmf_tgt_br" 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:49.848 Cannot find device "nvmf_tgt_br2" 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.848 16:18:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:49.848 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:50.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:14:50.106 00:14:50.106 --- 10.0.0.2 ping statistics --- 00:14:50.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.106 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:50.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:50.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:14:50.106 00:14:50.106 --- 10.0.0.3 ping statistics --- 00:14:50.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.106 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:50.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:50.106 00:14:50.106 --- 10.0.0.1 ping statistics --- 00:14:50.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.106 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:50.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76231 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76231 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76231 ']' 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.106 16:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:50.106 [2024-07-12 16:18:33.716029] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
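The nvmf_veth_init steps above stitch the test network together out of a network namespace, veth pairs and a bridge before the target is launched inside that namespace. A condensed sketch of the same sequence, assuming root privileges and the interface names used by the trace (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and omitted here):

# Namespace plus veth pairs: the host side stays in the root namespace,
# the target side is moved into nvmf_tgt_ns_spdk.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses: 10.0.0.1 for the initiator side, 10.0.0.2 for the target side.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up and bridge the root-namespace ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Once 10.0.0.1 <-> 10.0.0.2 ping, launch the target inside the namespace
# (this is the nvmf_tgt instance whose startup banner appears just above).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3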
00:14:50.106 [2024-07-12 16:18:33.716928] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.364 [2024-07-12 16:18:33.853076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:50.364 [2024-07-12 16:18:33.913538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.364 [2024-07-12 16:18:33.914023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.364 [2024-07-12 16:18:33.914263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.364 [2024-07-12 16:18:33.914631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.364 [2024-07-12 16:18:33.914915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.364 [2024-07-12 16:18:33.915214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.364 [2024-07-12 16:18:33.915225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.364 [2024-07-12 16:18:33.945636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:50.928 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.928 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:14:50.928 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.928 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.928 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:51.185 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.185 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76231 00:14:51.185 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:51.479 [2024-07-12 16:18:34.964861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.479 16:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:51.737 Malloc0 00:14:51.737 16:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:14:51.994 16:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.252 16:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.510 [2024-07-12 16:18:36.152481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.510 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:52.768 [2024-07-12 16:18:36.440671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76288 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76288 /var/tmp/bdevperf.sock 00:14:52.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76288 ']' 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.768 16:18:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:54.140 16:18:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.140 16:18:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:14:54.140 16:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:54.140 16:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:14:54.398 Nvme0n1 00:14:54.398 16:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:54.964 Nvme0n1 00:14:54.964 16:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:54.964 16:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:14:56.862 16:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:14:56.862 16:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:57.120 16:18:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:14:57.378 16:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:14:58.312 16:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:14:58.312 16:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:58.312 16:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:58.312 16:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:58.569 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:58.569 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:58.569 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:58.569 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:59.135 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:59.135 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:59.135 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.135 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:59.392 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.392 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:59.392 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.392 16:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:59.649 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.649 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:59.650 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.650 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:59.907 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.907 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:14:59.907 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.907 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:00.259 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:00.260 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:00.260 16:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:00.536 16:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:00.793 16:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:01.726 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:01.727 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:01.727 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:01.727 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:01.985 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:01.985 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:01.985 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:01.985 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:02.242 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.242 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:02.242 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.242 16:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:02.500 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.500 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:02.500 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.500 16:18:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:02.759 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.759 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:02.759 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.759 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:03.017 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:03.017 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:03.017 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.017 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:03.275 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:03.275 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:03.275 16:18:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:03.533 16:18:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:04.099 16:18:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:05.035 16:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:05.035 16:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:05.035 16:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.035 16:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:05.294 16:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:05.294 16:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:05.294 16:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:05.294 16:18:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.552 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:15:05.552 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:05.552 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.553 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:05.812 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:05.812 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:05.812 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.812 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:06.070 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.070 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:06.070 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.070 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:06.328 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.328 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:06.328 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.328 16:18:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:06.586 16:18:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.586 16:18:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:06.586 16:18:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:06.843 16:18:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:07.101 16:18:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:08.034 16:18:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:08.034 16:18:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:08.034 16:18:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.034 16:18:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:08.600 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:08.600 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:08.600 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.600 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:08.600 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:08.600 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:08.600 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.600 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:09.167 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.167 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:09.167 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.167 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:09.425 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.425 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:09.425 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.425 16:18:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:09.683 16:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.683 16:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:09.683 16:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.683 16:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:09.942 16:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:09.942 16:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:15:09.942 16:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:10.200 16:18:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:10.458 16:18:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:11.833 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:11.833 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:11.833 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:11.833 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:11.833 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:11.833 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:11.833 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:11.833 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.092 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:12.092 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:12.092 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.092 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:12.350 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:12.350 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:12.350 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:12.350 16:18:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.608 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:12.608 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:12.608 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.608 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:12.867 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:12.867 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:12.867 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.867 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:13.434 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:13.434 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:13.434 16:18:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:13.434 16:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:13.692 16:18:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:15.067 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:15.067 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:15.067 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.067 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:15.067 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:15.067 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:15.067 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.067 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:15.326 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.326 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:15.326 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:15.326 16:18:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.586 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.586 16:18:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:15.586 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.586 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:15.862 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.862 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:15.862 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.862 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:16.129 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:16.129 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:16.129 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:16.129 16:18:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:16.387 16:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:16.387 16:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:16.952 16:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:16.952 16:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:16.952 16:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:17.210 16:19:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:18.585 16:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:18.585 16:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:18.585 16:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:18.585 16:19:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:18.585 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:18.585 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:18.585 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:18.585 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:18.843 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:18.843 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:18.843 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:18.843 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:19.101 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.101 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:19.101 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:19.101 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.358 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.358 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:19.358 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.358 16:19:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:19.616 16:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.616 16:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:19.616 16:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.616 16:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:19.874 16:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.874 16:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:19.874 16:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:20.133 16:19:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:20.391 16:19:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:21.328 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:21.328 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:21.328 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.328 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:21.587 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:21.587 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:21.587 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:21.587 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.846 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.846 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:21.846 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.846 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:22.414 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.414 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:22.414 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:22.414 16:19:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.414 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.414 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:22.414 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.414 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:22.672 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.672 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:22.672 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:22.672 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.238 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.238 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:23.238 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:23.238 16:19:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:23.497 16:19:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:24.433 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:24.433 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:24.433 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.433 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:25.000 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.000 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:25.000 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:25.000 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.000 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.000 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:25.000 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.000 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:25.258 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.258 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:25.517 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.517 16:19:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 
00:15:25.775 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.775 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:25.775 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.775 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:25.775 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.775 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:25.775 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.775 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:26.342 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.342 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:26.342 16:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:26.342 16:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:26.600 16:19:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:27.975 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:27.975 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:27.975 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.975 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:27.975 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:27.975 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:27.975 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.975 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:28.234 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:28.234 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:28.234 
16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:28.234 16:19:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.500 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.500 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:28.500 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.500 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:28.767 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.767 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:28.767 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:28.767 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.026 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.026 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:29.026 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.026 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76288 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76288 ']' 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76288 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76288 00:15:29.284 killing process with pid 76288 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76288' 00:15:29.284 16:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76288 00:15:29.285 16:19:12 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76288 00:15:29.546 Connection closed with partial response: 00:15:29.546 00:15:29.546 00:15:29.546 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76288 00:15:29.546 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:29.546 [2024-07-12 16:18:36.527946] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:15:29.546 [2024-07-12 16:18:36.528115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76288 ] 00:15:29.546 [2024-07-12 16:18:36.668247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.546 [2024-07-12 16:18:36.755529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.546 [2024-07-12 16:18:36.791063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:29.546 Running I/O for 90 seconds... 00:15:29.546 [2024-07-12 16:18:53.840636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.840718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.840778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.840800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.840829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.840845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.840880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.840898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.840921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.840936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.840958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.840973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.840995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114480 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.546 [2024-07-12 16:18:53.841269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.546 [2024-07-12 16:18:53.841306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.546 [2024-07-12 16:18:53.841350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.546 [2024-07-12 16:18:53.841387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841409] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.546 [2024-07-12 16:18:53.841424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.546 [2024-07-12 16:18:53.841461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.546 [2024-07-12 16:18:53.841498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.546 [2024-07-12 16:18:53.841535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 
16:18:53.841820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.841974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.841996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.842012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.842034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.546 [2024-07-12 16:18:53.842050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:29.546 [2024-07-12 16:18:53.842072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.842087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.842125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.842419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.842470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 
cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.842509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.842546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.842584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.842622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.842659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.842697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.842734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.842772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.842809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.842846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.842900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.842951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.842990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.547 [2024-07-12 16:18:53.843635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.843983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.843999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:29.547 [2024-07-12 16:18:53.844021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.547 [2024-07-12 16:18:53.844037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.844076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 
16:18:53.844098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.844114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.844151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.844189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.844227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.844270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.844880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.844927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.844967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.844990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.845449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.845465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.846242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.548 [2024-07-12 16:18:53.846271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.846308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.846326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.846357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.846379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.846421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.846438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.846469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.846485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:29.548 [2024-07-12 16:18:53.846516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.548 [2024-07-12 16:18:53.846532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.846563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:18:53.846579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.846610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:18:53.846626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.846671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:18:53.846691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.846723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:18:53.846740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.846783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:18:53.846800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.846831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:18:53.846847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.846892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:18:53.846911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.846942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:18:53.846966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 
16:18:53.846997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:18:53.847026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.847060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:18:53.847076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.847107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:18:53.847123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:18:53.847155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:18:53.847171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.255416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.255509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.255546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.255581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.255661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.255699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 
cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.255735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.255788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.255825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.255862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.255899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.255955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.255977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.255992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.256029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.256066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.256102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.256166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.256206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.256242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.256296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.549 [2024-07-12 16:19:10.256651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:29.549 [2024-07-12 16:19:10.256682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.549 [2024-07-12 16:19:10.256699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.256721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.256737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.256759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.256775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.256797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.256813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.256835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.256851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.256888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.256906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 
[2024-07-12 16:19:10.258328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.258661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.258698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2608 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.258736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.258774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.258935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.258973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.258995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.259011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.259032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.259048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.259070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.259097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.259120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:29.550 [2024-07-12 16:19:10.259136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.259158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.259174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.259196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.259212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.259234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.259249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:29.550 [2024-07-12 16:19:10.259286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:29.550 [2024-07-12 16:19:10.259301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:29.550 Received shutdown signal, test time was about 34.460287 seconds 00:15:29.550 00:15:29.550 Latency(us) 00:15:29.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.550 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:29.550 Verification LBA range: start 0x0 length 0x4000 00:15:29.550 Nvme0n1 : 34.46 8326.05 32.52 0.00 0.00 15340.06 930.91 4026531.84 00:15:29.550 =================================================================================================================== 00:15:29.550 Total : 8326.05 32.52 0.00 0.00 15340.06 930.91 4026531.84 00:15:29.550 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.809 rmmod nvme_tcp 00:15:29.809 rmmod nvme_fabrics 00:15:29.809 rmmod nvme_keyring 00:15:29.809 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:15:30.068 16:19:13 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76231 ']' 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76231 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76231 ']' 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76231 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76231 00:15:30.068 killing process with pid 76231 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76231' 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76231 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76231 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:30.068 ************************************ 00:15:30.068 END TEST nvmf_host_multipath_status 00:15:30.068 ************************************ 00:15:30.068 00:15:30.068 real 0m40.578s 00:15:30.068 user 2m11.705s 00:15:30.068 sys 0m11.926s 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.068 16:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:30.328 16:19:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:30.328 16:19:13 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:30.328 16:19:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:30.328 16:19:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.328 16:19:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.328 ************************************ 00:15:30.328 START TEST nvmf_discovery_remove_ifc 00:15:30.328 ************************************ 00:15:30.328 16:19:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:30.328 * Looking for test storage... 00:15:30.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:30.328 Cannot find device "nvmf_tgt_br" 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:30.328 Cannot find device "nvmf_tgt_br2" 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:30.328 16:19:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:30.328 Cannot find device "nvmf_tgt_br" 00:15:30.328 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:15:30.328 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:30.328 Cannot find device "nvmf_tgt_br2" 00:15:30.328 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:15:30.328 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:30.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:30.587 00:15:30.587 --- 10.0.0.2 ping statistics --- 00:15:30.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.587 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:30.587 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.587 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:30.587 00:15:30.587 --- 10.0.0.3 ping statistics --- 00:15:30.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.587 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:30.587 00:15:30.587 --- 10.0.0.1 ping statistics --- 00:15:30.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.587 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:30.587 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.588 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77084 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77084 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77084 ']' 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.846 16:19:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:30.846 [2024-07-12 16:19:14.373761] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:15:30.846 [2024-07-12 16:19:14.374496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.846 [2024-07-12 16:19:14.522518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.104 [2024-07-12 16:19:14.593455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.104 [2024-07-12 16:19:14.593516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.104 [2024-07-12 16:19:14.593532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.104 [2024-07-12 16:19:14.593542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.104 [2024-07-12 16:19:14.593551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.104 [2024-07-12 16:19:14.593579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.104 [2024-07-12 16:19:14.626824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:31.669 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.669 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:15:31.669 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.669 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.669 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:31.927 [2024-07-12 16:19:15.427680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.927 [2024-07-12 16:19:15.435770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:31.927 null0 00:15:31.927 [2024-07-12 16:19:15.467733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77116 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:31.927 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
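Two nvmf_tgt instances are started here: the target proper inside nvmf_tgt_ns_spdk (pid 77084, RPC on the default /var/tmp/spdk.sock) and a second instance playing the host/initiator role (pid 77116, RPC on /tmp/host.sock, with bdev_nvme debug logging). A minimal sketch of those two launches, with paths and flags copied from the trace; the backgrounding and readiness comment are an illustrative simplification of the nvmfappstart/waitforlisten helpers, not their exact behavior:

    # Target application, confined to the target namespace (flags from the trace).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # Host-side application, exposing its RPC server on /tmp/host.sock.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock \
        --wait-for-rpc -L bdev_nvme &
    # Each side is considered ready once its RPC socket accepts connections
    # (/var/tmp/spdk.sock for the target, /tmp/host.sock for the host).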
00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77116 /tmp/host.sock 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77116 ']' 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.927 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:31.927 [2024-07-12 16:19:15.539790] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:15:31.927 [2024-07-12 16:19:15.539899] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77116 ] 00:15:32.185 [2024-07-12 16:19:15.683285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.185 [2024-07-12 16:19:15.748983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.185 [2024-07-12 16:19:15.831485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.185 16:19:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:33.559 [2024-07-12 
16:19:16.860778] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:33.559 [2024-07-12 16:19:16.860816] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:33.559 [2024-07-12 16:19:16.860837] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:33.559 [2024-07-12 16:19:16.866831] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:33.559 [2024-07-12 16:19:16.923830] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:33.559 [2024-07-12 16:19:16.923931] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:33.559 [2024-07-12 16:19:16.923963] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:33.559 [2024-07-12 16:19:16.923985] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:33.559 [2024-07-12 16:19:16.924013] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.559 [2024-07-12 16:19:16.929419] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x107dd90 was disconnected and freed. delete nvme_qpair. 
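The host side is driven over RPC: set bdev_nvme options, finish framework init, then start discovery against 10.0.0.2:8009 with short reconnect/loss timeouts and wait for the subsystem to attach, which is what produces the nvme0 attach and nvme0n1 bdev seen above. A condensed sketch assuming the rpc_cmd helper is equivalent to invoking scripts/rpc.py against /tmp/host.sock (the exact rpc.py path is an assumption; all flags are copied from the trace):

    # Host-side RPC sequence from the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /tmp/host.sock bdev_nvme_set_options -e 1
    "$rpc" -s /tmp/host.sock framework_start_init
    "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # List the resulting bdevs; the test expects nvme0n1 here.
    "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs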
00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:33.559 16:19:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:33.559 16:19:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:34.493 16:19:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:35.425 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:35.425 
16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.425 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.425 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:35.425 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:35.425 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:35.425 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:35.682 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.682 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:35.682 16:19:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:36.617 16:19:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:37.551 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:37.551 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.551 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.551 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:37.551 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:37.551 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:37.551 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:37.809 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.809 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:37.809 16:19:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.743 [2024-07-12 16:19:22.351946] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:38.743 [2024-07-12 16:19:22.352013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.743 [2024-07-12 16:19:22.352031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.743 [2024-07-12 16:19:22.352046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.743 [2024-07-12 16:19:22.352056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.743 [2024-07-12 16:19:22.352066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.743 [2024-07-12 16:19:22.352076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.743 [2024-07-12 16:19:22.352086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.743 [2024-07-12 16:19:22.352095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.743 [2024-07-12 16:19:22.352106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.743 [2024-07-12 16:19:22.352116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.743 [2024-07-12 16:19:22.352126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe3970 is same with the state(5) to be set 00:15:38.743 [2024-07-12 16:19:22.361939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe3970 (9): Bad file descriptor 00:15:38.743 [2024-07-12 16:19:22.371973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:38.743 16:19:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:39.679 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:39.679 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:39.679 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:39.679 16:19:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:39.679 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.679 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:39.679 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:39.944 [2024-07-12 16:19:23.426001] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:39.944 [2024-07-12 16:19:23.426418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe3970 with addr=10.0.0.2, port=4420 00:15:39.944 [2024-07-12 16:19:23.426727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe3970 is same with the state(5) to be set 00:15:39.944 [2024-07-12 16:19:23.427268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe3970 (9): Bad file descriptor 00:15:39.944 [2024-07-12 16:19:23.428557] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:39.944 [2024-07-12 16:19:23.428919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:39.944 [2024-07-12 16:19:23.429299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:39.944 [2024-07-12 16:19:23.429556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:39.944 [2024-07-12 16:19:23.429821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:39.944 [2024-07-12 16:19:23.430083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:39.944 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.944 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:39.944 16:19:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:40.887 [2024-07-12 16:19:24.430175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:40.887 [2024-07-12 16:19:24.430234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:40.887 [2024-07-12 16:19:24.430248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:40.887 [2024-07-12 16:19:24.430258] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:40.887 [2024-07-12 16:19:24.430294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
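The connect() errno 110 timeouts and failed controller resets above are the intended outcome of the fault injected earlier in the trace, where the target's address was removed and its interface taken down inside the namespace; how long the host keeps retrying before declaring the controller lost is bounded by the --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 options passed to bdev_nvme_start_discovery. The injection step, commands copied from the trace:

    # Fault injection: drop the target address and link inside the namespace
    # so the host's reconnect attempts time out (errno 110).
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down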
00:15:40.887 [2024-07-12 16:19:24.430326] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:40.887 [2024-07-12 16:19:24.430378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.887 [2024-07-12 16:19:24.430396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.887 [2024-07-12 16:19:24.430410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.887 [2024-07-12 16:19:24.430421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.887 [2024-07-12 16:19:24.430431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.887 [2024-07-12 16:19:24.430440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.887 [2024-07-12 16:19:24.430450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.887 [2024-07-12 16:19:24.430460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.887 [2024-07-12 16:19:24.430470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.887 [2024-07-12 16:19:24.430480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.887 [2024-07-12 16:19:24.430489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
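Throughout this stretch the test polls the host's bdev table once per second until it matches the expected state: nvme0n1 while the path is healthy, then an empty list once the controller is torn down and the discovery entry removed, as the messages above show. A simplified sketch of that polling shape, assuming the same rpc.py path as above and omitting any retry limit the real helpers in host/discovery_remove_ifc.sh may enforce:

    # Sketch of the get_bdev_list/wait_for_bdev polling seen in the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_list() {
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # usage: wait_for_bdev nvme0n1   or   wait_for_bdev ''
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }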
00:15:40.887 [2024-07-12 16:19:24.431178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe7710 (9): Bad file descriptor 00:15:40.887 [2024-07-12 16:19:24.432206] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:40.887 [2024-07-12 16:19:24.432230] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:40.887 16:19:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:42.262 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:42.262 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.262 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.262 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:42.262 16:19:25 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:42.263 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:42.263 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:42.263 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.263 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:42.263 16:19:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:42.828 [2024-07-12 16:19:26.435470] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:42.828 [2024-07-12 16:19:26.435509] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:42.828 [2024-07-12 16:19:26.435529] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:42.828 [2024-07-12 16:19:26.441511] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:42.828 [2024-07-12 16:19:26.497672] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:42.828 [2024-07-12 16:19:26.497966] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:42.828 [2024-07-12 16:19:26.498038] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:42.828 [2024-07-12 16:19:26.498148] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:15:42.828 [2024-07-12 16:19:26.498216] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:42.828 [2024-07-12 16:19:26.504169] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x108d890 was disconnected and freed. delete nvme_qpair. 
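Re-adding 10.0.0.2 and bringing nvmf_tgt_if back up lets the still-running discovery service reconnect, attach the subsystem as a fresh controller (nvme1), and surface nvme1n1, which is what the attach and log-page messages above show. The restore step, commands copied from the trace, with the polling helper sketched earlier:

    # Restore the target address and link, then wait for the new bdev.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1    # polling helper as sketched above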
00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77116 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77116 ']' 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77116 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77116 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.087 killing process with pid 77116 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77116' 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77116 00:15:43.087 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77116 00:15:43.346 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:43.346 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.346 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:15:43.346 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.346 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:15:43.346 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.346 16:19:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.346 rmmod nvme_tcp 00:15:43.346 rmmod nvme_fabrics 00:15:43.346 rmmod nvme_keyring 00:15:43.346 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.346 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:15:43.346 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:15:43.346 16:19:27 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77084 ']' 00:15:43.346 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77084 00:15:43.346 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77084 ']' 00:15:43.346 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77084 00:15:43.346 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:15:43.347 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.347 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77084 00:15:43.347 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:43.347 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:43.347 killing process with pid 77084 00:15:43.347 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77084' 00:15:43.347 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77084 00:15:43.347 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77084 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:43.606 00:15:43.606 real 0m13.406s 00:15:43.606 user 0m23.075s 00:15:43.606 sys 0m2.216s 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.606 16:19:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:43.606 ************************************ 00:15:43.606 END TEST nvmf_discovery_remove_ifc 00:15:43.606 ************************************ 00:15:43.606 16:19:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:43.606 16:19:27 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:43.606 16:19:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:43.606 16:19:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.606 16:19:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.606 ************************************ 00:15:43.606 START TEST nvmf_identify_kernel_target 00:15:43.606 ************************************ 00:15:43.606 16:19:27 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:43.865 * Looking for test storage... 00:15:43.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:43.865 Cannot find device "nvmf_tgt_br" 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.865 Cannot find device "nvmf_tgt_br2" 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:43.865 Cannot find device "nvmf_tgt_br" 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:43.865 Cannot find device "nvmf_tgt_br2" 00:15:43.865 16:19:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:43.865 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.124 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:44.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:44.125 00:15:44.125 --- 10.0.0.2 ping statistics --- 00:15:44.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.125 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:44.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:44.125 00:15:44.125 --- 10.0.0.3 ping statistics --- 00:15:44.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.125 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:44.125 00:15:44.125 --- 10.0.0.1 ping statistics --- 00:15:44.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.125 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:44.125 16:19:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:44.384 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:44.384 Waiting for block devices as requested 00:15:44.642 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:44.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:44.643 No valid GPT data, bailing 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:44.643 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:44.901 No valid GPT data, bailing 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:44.901 No valid GPT data, bailing 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:44.901 No valid GPT data, bailing 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
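The loop traced above is configure_kernel_target picking a backing device: every /sys/block/nvme* entry is considered, zoned namespaces are skipped, and a device counts as free only when neither scripts/spdk-gpt.py nor blkid reports a partition table on it (the "No valid GPT data, bailing" lines), so the last free namespace, /dev/nvme1n1 here, ends up exported. A minimal stand-alone sketch of that selection, assuming root and using blkid only (the spdk-gpt.py check is left out):

# Pick the last NVMe namespace that is neither zoned nor already partitioned.
nvme=""
for block in /sys/block/nvme*; do
  [[ -e $block ]] || continue
  dev=/dev/${block##*/}
  # Skip zoned namespaces (the test only exports conventional ones).
  [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
  # Skip devices that already carry a partition table.
  [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
  nvme=$dev
done
echo "selected backing device: ${nvme:-none}"

The configfs steps that continue below then hand this device to the kernel target.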
00:15:44.901 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid=0f8ee936-81ee-4845-9dc2-94c8381dda10 -a 10.0.0.1 -t tcp -s 4420 00:15:44.902 00:15:44.902 Discovery Log Number of Records 2, Generation counter 2 00:15:44.902 =====Discovery Log Entry 0====== 00:15:44.902 trtype: tcp 00:15:44.902 adrfam: ipv4 00:15:44.902 subtype: current discovery subsystem 00:15:44.902 treq: not specified, sq flow control disable supported 00:15:44.902 portid: 1 00:15:44.902 trsvcid: 4420 00:15:44.902 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:44.902 traddr: 10.0.0.1 00:15:44.902 eflags: none 00:15:44.902 sectype: none 00:15:44.902 =====Discovery Log Entry 1====== 00:15:44.902 trtype: tcp 00:15:44.902 adrfam: ipv4 00:15:44.902 subtype: nvme subsystem 00:15:44.902 treq: not specified, sq flow control disable supported 00:15:44.902 portid: 1 00:15:44.902 trsvcid: 4420 00:15:44.902 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:44.902 traddr: 10.0.0.1 00:15:44.902 eflags: none 00:15:44.902 sectype: none 00:15:44.902 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:44.902 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:45.178 ===================================================== 00:15:45.178 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:45.178 ===================================================== 00:15:45.178 Controller Capabilities/Features 00:15:45.178 ================================ 00:15:45.178 Vendor ID: 0000 00:15:45.178 Subsystem Vendor ID: 0000 00:15:45.178 Serial Number: 02fad459ac0208a93243 00:15:45.178 Model Number: Linux 00:15:45.178 Firmware Version: 6.7.0-68 00:15:45.178 Recommended Arb Burst: 0 00:15:45.178 IEEE OUI Identifier: 00 00 00 00:15:45.178 Multi-path I/O 00:15:45.178 May have multiple subsystem ports: No 00:15:45.178 May have multiple controllers: No 00:15:45.178 Associated with SR-IOV VF: No 00:15:45.178 Max Data Transfer Size: Unlimited 00:15:45.178 Max Number of Namespaces: 0 
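Those mkdir/echo/ln -s entries are the entire kernel-target definition: one subsystem with one namespace backed by the selected /dev/nvme1n1, exposed on a TCP port at 10.0.0.1:4420. The xtrace does not show the echo redirect targets, so the attribute names in this sketch are the standard nvmet configfs ones and should be read as assumptions, though the identify output that follows (Model Number: SPDK-nqn.2016-06.io.spdk:testnqn) matches them. Run as root with the nvmet and nvmet_tcp modules loaded:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"                # creating the subsystem also creates namespaces/ and allowed_hosts/
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of the 'echo SPDK-nqn.2016-06.io.spdk:testnqn' entry above
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # linking the subsystem into the port starts listening

Once the symlink is in place, the nvme discover above immediately reports both the discovery subsystem and nqn.2016-06.io.spdk:testnqn.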
00:15:45.178 Max Number of I/O Queues: 1024 00:15:45.178 NVMe Specification Version (VS): 1.3 00:15:45.178 NVMe Specification Version (Identify): 1.3 00:15:45.178 Maximum Queue Entries: 1024 00:15:45.178 Contiguous Queues Required: No 00:15:45.178 Arbitration Mechanisms Supported 00:15:45.178 Weighted Round Robin: Not Supported 00:15:45.178 Vendor Specific: Not Supported 00:15:45.178 Reset Timeout: 7500 ms 00:15:45.178 Doorbell Stride: 4 bytes 00:15:45.178 NVM Subsystem Reset: Not Supported 00:15:45.178 Command Sets Supported 00:15:45.178 NVM Command Set: Supported 00:15:45.178 Boot Partition: Not Supported 00:15:45.178 Memory Page Size Minimum: 4096 bytes 00:15:45.178 Memory Page Size Maximum: 4096 bytes 00:15:45.178 Persistent Memory Region: Not Supported 00:15:45.178 Optional Asynchronous Events Supported 00:15:45.178 Namespace Attribute Notices: Not Supported 00:15:45.178 Firmware Activation Notices: Not Supported 00:15:45.178 ANA Change Notices: Not Supported 00:15:45.178 PLE Aggregate Log Change Notices: Not Supported 00:15:45.178 LBA Status Info Alert Notices: Not Supported 00:15:45.178 EGE Aggregate Log Change Notices: Not Supported 00:15:45.178 Normal NVM Subsystem Shutdown event: Not Supported 00:15:45.178 Zone Descriptor Change Notices: Not Supported 00:15:45.178 Discovery Log Change Notices: Supported 00:15:45.178 Controller Attributes 00:15:45.178 128-bit Host Identifier: Not Supported 00:15:45.178 Non-Operational Permissive Mode: Not Supported 00:15:45.178 NVM Sets: Not Supported 00:15:45.178 Read Recovery Levels: Not Supported 00:15:45.178 Endurance Groups: Not Supported 00:15:45.178 Predictable Latency Mode: Not Supported 00:15:45.178 Traffic Based Keep ALive: Not Supported 00:15:45.178 Namespace Granularity: Not Supported 00:15:45.178 SQ Associations: Not Supported 00:15:45.178 UUID List: Not Supported 00:15:45.178 Multi-Domain Subsystem: Not Supported 00:15:45.178 Fixed Capacity Management: Not Supported 00:15:45.178 Variable Capacity Management: Not Supported 00:15:45.178 Delete Endurance Group: Not Supported 00:15:45.178 Delete NVM Set: Not Supported 00:15:45.178 Extended LBA Formats Supported: Not Supported 00:15:45.178 Flexible Data Placement Supported: Not Supported 00:15:45.178 00:15:45.178 Controller Memory Buffer Support 00:15:45.178 ================================ 00:15:45.178 Supported: No 00:15:45.178 00:15:45.178 Persistent Memory Region Support 00:15:45.178 ================================ 00:15:45.178 Supported: No 00:15:45.178 00:15:45.178 Admin Command Set Attributes 00:15:45.178 ============================ 00:15:45.178 Security Send/Receive: Not Supported 00:15:45.178 Format NVM: Not Supported 00:15:45.178 Firmware Activate/Download: Not Supported 00:15:45.178 Namespace Management: Not Supported 00:15:45.178 Device Self-Test: Not Supported 00:15:45.178 Directives: Not Supported 00:15:45.178 NVMe-MI: Not Supported 00:15:45.178 Virtualization Management: Not Supported 00:15:45.178 Doorbell Buffer Config: Not Supported 00:15:45.178 Get LBA Status Capability: Not Supported 00:15:45.178 Command & Feature Lockdown Capability: Not Supported 00:15:45.178 Abort Command Limit: 1 00:15:45.178 Async Event Request Limit: 1 00:15:45.178 Number of Firmware Slots: N/A 00:15:45.178 Firmware Slot 1 Read-Only: N/A 00:15:45.178 Firmware Activation Without Reset: N/A 00:15:45.178 Multiple Update Detection Support: N/A 00:15:45.178 Firmware Update Granularity: No Information Provided 00:15:45.178 Per-Namespace SMART Log: No 00:15:45.178 Asymmetric Namespace Access Log Page: 
Not Supported 00:15:45.178 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:45.178 Command Effects Log Page: Not Supported 00:15:45.178 Get Log Page Extended Data: Supported 00:15:45.178 Telemetry Log Pages: Not Supported 00:15:45.178 Persistent Event Log Pages: Not Supported 00:15:45.178 Supported Log Pages Log Page: May Support 00:15:45.178 Commands Supported & Effects Log Page: Not Supported 00:15:45.178 Feature Identifiers & Effects Log Page:May Support 00:15:45.178 NVMe-MI Commands & Effects Log Page: May Support 00:15:45.178 Data Area 4 for Telemetry Log: Not Supported 00:15:45.178 Error Log Page Entries Supported: 1 00:15:45.178 Keep Alive: Not Supported 00:15:45.178 00:15:45.178 NVM Command Set Attributes 00:15:45.178 ========================== 00:15:45.178 Submission Queue Entry Size 00:15:45.178 Max: 1 00:15:45.178 Min: 1 00:15:45.178 Completion Queue Entry Size 00:15:45.178 Max: 1 00:15:45.178 Min: 1 00:15:45.178 Number of Namespaces: 0 00:15:45.178 Compare Command: Not Supported 00:15:45.178 Write Uncorrectable Command: Not Supported 00:15:45.178 Dataset Management Command: Not Supported 00:15:45.178 Write Zeroes Command: Not Supported 00:15:45.178 Set Features Save Field: Not Supported 00:15:45.178 Reservations: Not Supported 00:15:45.178 Timestamp: Not Supported 00:15:45.178 Copy: Not Supported 00:15:45.178 Volatile Write Cache: Not Present 00:15:45.178 Atomic Write Unit (Normal): 1 00:15:45.178 Atomic Write Unit (PFail): 1 00:15:45.178 Atomic Compare & Write Unit: 1 00:15:45.178 Fused Compare & Write: Not Supported 00:15:45.178 Scatter-Gather List 00:15:45.178 SGL Command Set: Supported 00:15:45.178 SGL Keyed: Not Supported 00:15:45.178 SGL Bit Bucket Descriptor: Not Supported 00:15:45.178 SGL Metadata Pointer: Not Supported 00:15:45.178 Oversized SGL: Not Supported 00:15:45.178 SGL Metadata Address: Not Supported 00:15:45.178 SGL Offset: Supported 00:15:45.178 Transport SGL Data Block: Not Supported 00:15:45.178 Replay Protected Memory Block: Not Supported 00:15:45.178 00:15:45.178 Firmware Slot Information 00:15:45.178 ========================= 00:15:45.178 Active slot: 0 00:15:45.178 00:15:45.178 00:15:45.178 Error Log 00:15:45.178 ========= 00:15:45.178 00:15:45.178 Active Namespaces 00:15:45.178 ================= 00:15:45.178 Discovery Log Page 00:15:45.178 ================== 00:15:45.178 Generation Counter: 2 00:15:45.178 Number of Records: 2 00:15:45.178 Record Format: 0 00:15:45.178 00:15:45.178 Discovery Log Entry 0 00:15:45.178 ---------------------- 00:15:45.178 Transport Type: 3 (TCP) 00:15:45.178 Address Family: 1 (IPv4) 00:15:45.178 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:45.178 Entry Flags: 00:15:45.178 Duplicate Returned Information: 0 00:15:45.178 Explicit Persistent Connection Support for Discovery: 0 00:15:45.178 Transport Requirements: 00:15:45.178 Secure Channel: Not Specified 00:15:45.178 Port ID: 1 (0x0001) 00:15:45.178 Controller ID: 65535 (0xffff) 00:15:45.178 Admin Max SQ Size: 32 00:15:45.178 Transport Service Identifier: 4420 00:15:45.178 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:45.178 Transport Address: 10.0.0.1 00:15:45.178 Discovery Log Entry 1 00:15:45.178 ---------------------- 00:15:45.178 Transport Type: 3 (TCP) 00:15:45.178 Address Family: 1 (IPv4) 00:15:45.178 Subsystem Type: 2 (NVM Subsystem) 00:15:45.178 Entry Flags: 00:15:45.178 Duplicate Returned Information: 0 00:15:45.178 Explicit Persistent Connection Support for Discovery: 0 00:15:45.178 Transport Requirements: 00:15:45.178 
Secure Channel: Not Specified 00:15:45.178 Port ID: 1 (0x0001) 00:15:45.178 Controller ID: 65535 (0xffff) 00:15:45.178 Admin Max SQ Size: 32 00:15:45.178 Transport Service Identifier: 4420 00:15:45.178 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:45.178 Transport Address: 10.0.0.1 00:15:45.178 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:45.438 get_feature(0x01) failed 00:15:45.438 get_feature(0x02) failed 00:15:45.438 get_feature(0x04) failed 00:15:45.438 ===================================================== 00:15:45.438 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:45.438 ===================================================== 00:15:45.438 Controller Capabilities/Features 00:15:45.438 ================================ 00:15:45.438 Vendor ID: 0000 00:15:45.438 Subsystem Vendor ID: 0000 00:15:45.438 Serial Number: c91a2fda5b0910075b1a 00:15:45.438 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:45.438 Firmware Version: 6.7.0-68 00:15:45.438 Recommended Arb Burst: 6 00:15:45.438 IEEE OUI Identifier: 00 00 00 00:15:45.438 Multi-path I/O 00:15:45.438 May have multiple subsystem ports: Yes 00:15:45.438 May have multiple controllers: Yes 00:15:45.438 Associated with SR-IOV VF: No 00:15:45.438 Max Data Transfer Size: Unlimited 00:15:45.438 Max Number of Namespaces: 1024 00:15:45.438 Max Number of I/O Queues: 128 00:15:45.438 NVMe Specification Version (VS): 1.3 00:15:45.438 NVMe Specification Version (Identify): 1.3 00:15:45.438 Maximum Queue Entries: 1024 00:15:45.438 Contiguous Queues Required: No 00:15:45.438 Arbitration Mechanisms Supported 00:15:45.438 Weighted Round Robin: Not Supported 00:15:45.438 Vendor Specific: Not Supported 00:15:45.438 Reset Timeout: 7500 ms 00:15:45.438 Doorbell Stride: 4 bytes 00:15:45.438 NVM Subsystem Reset: Not Supported 00:15:45.438 Command Sets Supported 00:15:45.438 NVM Command Set: Supported 00:15:45.438 Boot Partition: Not Supported 00:15:45.438 Memory Page Size Minimum: 4096 bytes 00:15:45.438 Memory Page Size Maximum: 4096 bytes 00:15:45.438 Persistent Memory Region: Not Supported 00:15:45.438 Optional Asynchronous Events Supported 00:15:45.438 Namespace Attribute Notices: Supported 00:15:45.438 Firmware Activation Notices: Not Supported 00:15:45.438 ANA Change Notices: Supported 00:15:45.438 PLE Aggregate Log Change Notices: Not Supported 00:15:45.438 LBA Status Info Alert Notices: Not Supported 00:15:45.438 EGE Aggregate Log Change Notices: Not Supported 00:15:45.438 Normal NVM Subsystem Shutdown event: Not Supported 00:15:45.438 Zone Descriptor Change Notices: Not Supported 00:15:45.438 Discovery Log Change Notices: Not Supported 00:15:45.438 Controller Attributes 00:15:45.438 128-bit Host Identifier: Supported 00:15:45.438 Non-Operational Permissive Mode: Not Supported 00:15:45.438 NVM Sets: Not Supported 00:15:45.438 Read Recovery Levels: Not Supported 00:15:45.438 Endurance Groups: Not Supported 00:15:45.438 Predictable Latency Mode: Not Supported 00:15:45.438 Traffic Based Keep ALive: Supported 00:15:45.438 Namespace Granularity: Not Supported 00:15:45.438 SQ Associations: Not Supported 00:15:45.438 UUID List: Not Supported 00:15:45.438 Multi-Domain Subsystem: Not Supported 00:15:45.438 Fixed Capacity Management: Not Supported 00:15:45.438 Variable Capacity Management: Not Supported 00:15:45.438 
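Everything printed here comes from SPDK's own initiator (spdk_nvme_identify run against the kernel target). As a side note rather than a step of this test, the same subsystem could also be reached with the standard nvme-cli initiator, reusing the host NQN and host ID the test generated earlier:

# Not part of the test flow: plain nvme-cli against the same kernel target.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10
HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"
nvme list                                        # the exported namespace appears as a new /dev/nvmeXnY
nvme disconnect -n nqn.2016-06.io.spdk:testnqn   # detach again so the test's state is unchanged

The controller data continues below with the NVM command set attributes and the ANA group that the kernel target advertises for this namespace.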
Delete Endurance Group: Not Supported 00:15:45.438 Delete NVM Set: Not Supported 00:15:45.438 Extended LBA Formats Supported: Not Supported 00:15:45.438 Flexible Data Placement Supported: Not Supported 00:15:45.438 00:15:45.438 Controller Memory Buffer Support 00:15:45.438 ================================ 00:15:45.438 Supported: No 00:15:45.438 00:15:45.438 Persistent Memory Region Support 00:15:45.438 ================================ 00:15:45.438 Supported: No 00:15:45.438 00:15:45.438 Admin Command Set Attributes 00:15:45.438 ============================ 00:15:45.438 Security Send/Receive: Not Supported 00:15:45.438 Format NVM: Not Supported 00:15:45.438 Firmware Activate/Download: Not Supported 00:15:45.438 Namespace Management: Not Supported 00:15:45.438 Device Self-Test: Not Supported 00:15:45.438 Directives: Not Supported 00:15:45.438 NVMe-MI: Not Supported 00:15:45.438 Virtualization Management: Not Supported 00:15:45.438 Doorbell Buffer Config: Not Supported 00:15:45.438 Get LBA Status Capability: Not Supported 00:15:45.438 Command & Feature Lockdown Capability: Not Supported 00:15:45.438 Abort Command Limit: 4 00:15:45.438 Async Event Request Limit: 4 00:15:45.438 Number of Firmware Slots: N/A 00:15:45.438 Firmware Slot 1 Read-Only: N/A 00:15:45.438 Firmware Activation Without Reset: N/A 00:15:45.438 Multiple Update Detection Support: N/A 00:15:45.438 Firmware Update Granularity: No Information Provided 00:15:45.438 Per-Namespace SMART Log: Yes 00:15:45.438 Asymmetric Namespace Access Log Page: Supported 00:15:45.438 ANA Transition Time : 10 sec 00:15:45.438 00:15:45.438 Asymmetric Namespace Access Capabilities 00:15:45.438 ANA Optimized State : Supported 00:15:45.438 ANA Non-Optimized State : Supported 00:15:45.438 ANA Inaccessible State : Supported 00:15:45.438 ANA Persistent Loss State : Supported 00:15:45.438 ANA Change State : Supported 00:15:45.438 ANAGRPID is not changed : No 00:15:45.438 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:15:45.438 00:15:45.438 ANA Group Identifier Maximum : 128 00:15:45.438 Number of ANA Group Identifiers : 128 00:15:45.438 Max Number of Allowed Namespaces : 1024 00:15:45.438 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:15:45.438 Command Effects Log Page: Supported 00:15:45.438 Get Log Page Extended Data: Supported 00:15:45.438 Telemetry Log Pages: Not Supported 00:15:45.438 Persistent Event Log Pages: Not Supported 00:15:45.438 Supported Log Pages Log Page: May Support 00:15:45.438 Commands Supported & Effects Log Page: Not Supported 00:15:45.438 Feature Identifiers & Effects Log Page:May Support 00:15:45.438 NVMe-MI Commands & Effects Log Page: May Support 00:15:45.438 Data Area 4 for Telemetry Log: Not Supported 00:15:45.438 Error Log Page Entries Supported: 128 00:15:45.438 Keep Alive: Supported 00:15:45.438 Keep Alive Granularity: 1000 ms 00:15:45.438 00:15:45.438 NVM Command Set Attributes 00:15:45.439 ========================== 00:15:45.439 Submission Queue Entry Size 00:15:45.439 Max: 64 00:15:45.439 Min: 64 00:15:45.439 Completion Queue Entry Size 00:15:45.439 Max: 16 00:15:45.439 Min: 16 00:15:45.439 Number of Namespaces: 1024 00:15:45.439 Compare Command: Not Supported 00:15:45.439 Write Uncorrectable Command: Not Supported 00:15:45.439 Dataset Management Command: Supported 00:15:45.439 Write Zeroes Command: Supported 00:15:45.439 Set Features Save Field: Not Supported 00:15:45.439 Reservations: Not Supported 00:15:45.439 Timestamp: Not Supported 00:15:45.439 Copy: Not Supported 00:15:45.439 Volatile Write Cache: Present 
00:15:45.439 Atomic Write Unit (Normal): 1 00:15:45.439 Atomic Write Unit (PFail): 1 00:15:45.439 Atomic Compare & Write Unit: 1 00:15:45.439 Fused Compare & Write: Not Supported 00:15:45.439 Scatter-Gather List 00:15:45.439 SGL Command Set: Supported 00:15:45.439 SGL Keyed: Not Supported 00:15:45.439 SGL Bit Bucket Descriptor: Not Supported 00:15:45.439 SGL Metadata Pointer: Not Supported 00:15:45.439 Oversized SGL: Not Supported 00:15:45.439 SGL Metadata Address: Not Supported 00:15:45.439 SGL Offset: Supported 00:15:45.439 Transport SGL Data Block: Not Supported 00:15:45.439 Replay Protected Memory Block: Not Supported 00:15:45.439 00:15:45.439 Firmware Slot Information 00:15:45.439 ========================= 00:15:45.439 Active slot: 0 00:15:45.439 00:15:45.439 Asymmetric Namespace Access 00:15:45.439 =========================== 00:15:45.439 Change Count : 0 00:15:45.439 Number of ANA Group Descriptors : 1 00:15:45.439 ANA Group Descriptor : 0 00:15:45.439 ANA Group ID : 1 00:15:45.439 Number of NSID Values : 1 00:15:45.439 Change Count : 0 00:15:45.439 ANA State : 1 00:15:45.439 Namespace Identifier : 1 00:15:45.439 00:15:45.439 Commands Supported and Effects 00:15:45.439 ============================== 00:15:45.439 Admin Commands 00:15:45.439 -------------- 00:15:45.439 Get Log Page (02h): Supported 00:15:45.439 Identify (06h): Supported 00:15:45.439 Abort (08h): Supported 00:15:45.439 Set Features (09h): Supported 00:15:45.439 Get Features (0Ah): Supported 00:15:45.439 Asynchronous Event Request (0Ch): Supported 00:15:45.439 Keep Alive (18h): Supported 00:15:45.439 I/O Commands 00:15:45.439 ------------ 00:15:45.439 Flush (00h): Supported 00:15:45.439 Write (01h): Supported LBA-Change 00:15:45.439 Read (02h): Supported 00:15:45.439 Write Zeroes (08h): Supported LBA-Change 00:15:45.439 Dataset Management (09h): Supported 00:15:45.439 00:15:45.439 Error Log 00:15:45.439 ========= 00:15:45.439 Entry: 0 00:15:45.439 Error Count: 0x3 00:15:45.439 Submission Queue Id: 0x0 00:15:45.439 Command Id: 0x5 00:15:45.439 Phase Bit: 0 00:15:45.439 Status Code: 0x2 00:15:45.439 Status Code Type: 0x0 00:15:45.439 Do Not Retry: 1 00:15:45.439 Error Location: 0x28 00:15:45.439 LBA: 0x0 00:15:45.439 Namespace: 0x0 00:15:45.439 Vendor Log Page: 0x0 00:15:45.439 ----------- 00:15:45.439 Entry: 1 00:15:45.439 Error Count: 0x2 00:15:45.439 Submission Queue Id: 0x0 00:15:45.439 Command Id: 0x5 00:15:45.439 Phase Bit: 0 00:15:45.439 Status Code: 0x2 00:15:45.439 Status Code Type: 0x0 00:15:45.439 Do Not Retry: 1 00:15:45.439 Error Location: 0x28 00:15:45.439 LBA: 0x0 00:15:45.439 Namespace: 0x0 00:15:45.439 Vendor Log Page: 0x0 00:15:45.439 ----------- 00:15:45.439 Entry: 2 00:15:45.439 Error Count: 0x1 00:15:45.439 Submission Queue Id: 0x0 00:15:45.439 Command Id: 0x4 00:15:45.439 Phase Bit: 0 00:15:45.439 Status Code: 0x2 00:15:45.439 Status Code Type: 0x0 00:15:45.439 Do Not Retry: 1 00:15:45.439 Error Location: 0x28 00:15:45.439 LBA: 0x0 00:15:45.439 Namespace: 0x0 00:15:45.439 Vendor Log Page: 0x0 00:15:45.439 00:15:45.439 Number of Queues 00:15:45.439 ================ 00:15:45.439 Number of I/O Submission Queues: 128 00:15:45.439 Number of I/O Completion Queues: 128 00:15:45.439 00:15:45.439 ZNS Specific Controller Data 00:15:45.439 ============================ 00:15:45.439 Zone Append Size Limit: 0 00:15:45.439 00:15:45.439 00:15:45.439 Active Namespaces 00:15:45.439 ================= 00:15:45.439 get_feature(0x05) failed 00:15:45.439 Namespace ID:1 00:15:45.439 Command Set Identifier: NVM (00h) 
00:15:45.439 Deallocate: Supported 00:15:45.439 Deallocated/Unwritten Error: Not Supported 00:15:45.439 Deallocated Read Value: Unknown 00:15:45.439 Deallocate in Write Zeroes: Not Supported 00:15:45.439 Deallocated Guard Field: 0xFFFF 00:15:45.439 Flush: Supported 00:15:45.439 Reservation: Not Supported 00:15:45.439 Namespace Sharing Capabilities: Multiple Controllers 00:15:45.439 Size (in LBAs): 1310720 (5GiB) 00:15:45.439 Capacity (in LBAs): 1310720 (5GiB) 00:15:45.439 Utilization (in LBAs): 1310720 (5GiB) 00:15:45.439 UUID: 4b074548-d4af-435c-b720-ed4243ea4f1d 00:15:45.439 Thin Provisioning: Not Supported 00:15:45.439 Per-NS Atomic Units: Yes 00:15:45.439 Atomic Boundary Size (Normal): 0 00:15:45.439 Atomic Boundary Size (PFail): 0 00:15:45.439 Atomic Boundary Offset: 0 00:15:45.439 NGUID/EUI64 Never Reused: No 00:15:45.439 ANA group ID: 1 00:15:45.439 Namespace Write Protected: No 00:15:45.439 Number of LBA Formats: 1 00:15:45.439 Current LBA Format: LBA Format #00 00:15:45.439 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:45.439 00:15:45.439 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:45.439 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:45.439 16:19:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:45.439 rmmod nvme_tcp 00:15:45.439 rmmod nvme_fabrics 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:45.439 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:45.440 
16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:15:45.440 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:15:45.698 16:19:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:46.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:46.264 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.264 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.523 00:15:46.523 real 0m2.714s 00:15:46.523 user 0m0.936s 00:15:46.523 sys 0m1.286s 00:15:46.523 16:19:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.523 16:19:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.523 ************************************ 00:15:46.523 END TEST nvmf_identify_kernel_target 00:15:46.523 ************************************ 00:15:46.523 16:19:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:46.523 16:19:30 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:46.523 16:19:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:46.523 16:19:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.523 16:19:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:46.523 ************************************ 00:15:46.523 START TEST nvmf_auth_host 00:15:46.523 ************************************ 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:46.523 * Looking for test storage... 
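The teardown traced just before the timing summary mirrors the setup: nvmftestfini unloads nvme-tcp and nvme-fabrics and flushes the initiator address, and clean_kernel_target dismantles the configfs tree in reverse order before unloading nvmet. A minimal sketch of that kernel-target cleanup, reusing the assumed paths from the setup sketch and run as root (the echo and rmdir targets are again not visible in the xtrace):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
if [[ -e $subsys ]]; then
  echo 0 > "$subsys/namespaces/1/enable"        # disable the namespace before removing it
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
fi
modprobe -r nvmet_tcp nvmet 2>/dev/null || true  # only succeeds once nothing holds the modules

With the target gone, setup.sh rebinds the NVMe controllers to uio_pci_generic and the fixture is back in its initial state for the next test, nvmf_auth_host, which starts below.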
00:15:46.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.523 16:19:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:46.524 Cannot find device "nvmf_tgt_br" 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.524 Cannot find device "nvmf_tgt_br2" 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:46.524 Cannot find device "nvmf_tgt_br" 
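nvmf_auth_host now runs the same nvmftestinit/nvmf_veth_init sequence as the previous test: stale interfaces are removed first (hence the harmless "Cannot find device" lines), then the namespace, veth pairs, bridge and firewall rules are recreated, as traced below. A condensed sketch of the topology those commands build, collected from the trace and run as root:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target interfaces move into the netns
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br    # the bridge ties the host-side peers together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are connectivity checks across the bridge before the nvmf_tgt application is started inside nvmf_tgt_ns_spdk.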
00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:46.524 Cannot find device "nvmf_tgt_br2" 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:15:46.524 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:46.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:46.782 00:15:46.782 --- 10.0.0.2 ping statistics --- 00:15:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.782 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:46.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:46.782 00:15:46.782 --- 10.0.0.3 ping statistics --- 00:15:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.782 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:15:46.782 00:15:46.782 --- 10.0.0.1 ping statistics --- 00:15:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.782 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:46.782 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77989 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77989 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 77989 ']' 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.040 16:19:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.040 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d37123e059619b2035cdd5247f8f8282 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Cvu 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d37123e059619b2035cdd5247f8f8282 0 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d37123e059619b2035cdd5247f8f8282 0 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d37123e059619b2035cdd5247f8f8282 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Cvu 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Cvu 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Cvu 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10fc75e294a9bf850d91c70c574f7b9f53dda55a42a8a3bdce575d7f232e9507 00:15:47.297 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:47.297 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uLK 00:15:47.297 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10fc75e294a9bf850d91c70c574f7b9f53dda55a42a8a3bdce575d7f232e9507 3 00:15:47.297 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10fc75e294a9bf850d91c70c574f7b9f53dda55a42a8a3bdce575d7f232e9507 3 00:15:47.297 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.297 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.297 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10fc75e294a9bf850d91c70c574f7b9f53dda55a42a8a3bdce575d7f232e9507 00:15:47.297 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:15:47.297 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uLK 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uLK 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uLK 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2bb5b0ee388cf5eb71a67aaedbd619bda75ce0045b8a99e8 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.L15 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2bb5b0ee388cf5eb71a67aaedbd619bda75ce0045b8a99e8 0 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2bb5b0ee388cf5eb71a67aaedbd619bda75ce0045b8a99e8 0 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2bb5b0ee388cf5eb71a67aaedbd619bda75ce0045b8a99e8 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.L15 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.L15 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.L15 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d78f6a485489fcf603eed039aa9e4ff8446b02d626b5369 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MFH 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d78f6a485489fcf603eed039aa9e4ff8446b02d626b5369 2 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d78f6a485489fcf603eed039aa9e4ff8446b02d626b5369 2 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4d78f6a485489fcf603eed039aa9e4ff8446b02d626b5369 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MFH 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MFH 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MFH 00:15:47.555 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=28896564025c81904d2cb118c24fbd05 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zWc 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 28896564025c81904d2cb118c24fbd05 
1 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 28896564025c81904d2cb118c24fbd05 1 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=28896564025c81904d2cb118c24fbd05 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zWc 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zWc 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.zWc 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=913836350b2786b117c29f3f790bbc1c 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wVi 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 913836350b2786b117c29f3f790bbc1c 1 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 913836350b2786b117c29f3f790bbc1c 1 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=913836350b2786b117c29f3f790bbc1c 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:15:47.556 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wVi 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wVi 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wVi 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:47.813 16:19:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5ef6bd22a60e2e3198ba6c88a5d9ab986c056eca75b5c600 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Myc 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5ef6bd22a60e2e3198ba6c88a5d9ab986c056eca75b5c600 2 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5ef6bd22a60e2e3198ba6c88a5d9ab986c056eca75b5c600 2 00:15:47.813 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5ef6bd22a60e2e3198ba6c88a5d9ab986c056eca75b5c600 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Myc 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Myc 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Myc 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d6b63924f69557a73ac512670b474dbb 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kDf 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d6b63924f69557a73ac512670b474dbb 0 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d6b63924f69557a73ac512670b474dbb 0 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d6b63924f69557a73ac512670b474dbb 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kDf 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kDf 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kDf 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=904d474cee0cef3305e30bbf0687a753034d035af31ccf2e13e6dbe292dab435 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jVG 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 904d474cee0cef3305e30bbf0687a753034d035af31ccf2e13e6dbe292dab435 3 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 904d474cee0cef3305e30bbf0687a753034d035af31ccf2e13e6dbe292dab435 3 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=904d474cee0cef3305e30bbf0687a753034d035af31ccf2e13e6dbe292dab435 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jVG 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jVG 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jVG 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77989 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 77989 ']' 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
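The gen_dhchap_key calls above draw random hex from /dev/urandom with xxd and wrap it into a DHHC-1 secret via an inline python snippet before writing it to a 0600 key file. A minimal stand-alone sketch of that wrapping, assuming the secret body is the ASCII hex string with a little-endian CRC-32 appended before base64 encoding (helper-free, not the exact code in nvmf/common.sh):

key=$(xxd -p -c0 -l 16 /dev/urandom)            # 32 hex chars, as in "gen_dhchap_key null 32"
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
secret, hmac_id = sys.argv[1].encode(), int(sys.argv[2])   # hmac_id: 0=null, 1=sha256, 2=sha384, 3=sha512
blob = secret + zlib.crc32(secret).to_bytes(4, "little")   # assumed CRC placement/endianness
print(f"DHHC-1:{hmac_id:02x}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"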
00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.814 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Cvu 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uLK ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uLK 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.L15 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MFH ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MFH 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zWc 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wVi ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wVi 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
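Once the target reports ready on /var/tmp/spdk.sock, each generated key file is registered with the running nvmf_tgt by name over that RPC socket; the same keyring_file_add_key pattern repeats for the remaining key/ckey pairs below. Issued by hand with the repo's rpc.py front end it would look roughly like this (file names taken from the trace above):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Cvu
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uLK
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.L15
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MFH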
00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Myc 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kDf ]] 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kDf 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jVG 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
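The kernel_subsystem/kernel_namespace/kernel_port paths just defined point into the nvmet configfs tree; the entries that follow assemble an in-kernel TCP target there and back it with one of the probed block devices. A rough manual equivalent, using the standard nvmet configfs attribute names and the addresses, port, and device that appear further down in the trace (the helper's exact writes may differ slightly):

modprobe nvmet
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
mkdir /sys/kernel/config/nvmet/ports/1
echo /dev/nvme1n1 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/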
00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:48.330 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:48.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:48.588 Waiting for block devices as requested 00:15:48.588 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:48.845 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:49.413 16:19:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:49.413 No valid GPT data, bailing 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:49.413 No valid GPT data, bailing 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:49.413 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:49.671 No valid GPT data, bailing 00:15:49.671 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:49.671 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:49.671 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:49.671 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:15:49.671 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:49.671 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:49.671 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:49.672 No valid GPT data, bailing 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:15:49.672 16:19:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid=0f8ee936-81ee-4845-9dc2-94c8381dda10 -a 10.0.0.1 -t tcp -s 4420 00:15:49.672 00:15:49.672 Discovery Log Number of Records 2, Generation counter 2 00:15:49.672 =====Discovery Log Entry 0====== 00:15:49.672 trtype: tcp 00:15:49.672 adrfam: ipv4 00:15:49.672 subtype: current discovery subsystem 00:15:49.672 treq: not specified, sq flow control disable supported 00:15:49.672 portid: 1 00:15:49.672 trsvcid: 4420 00:15:49.672 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:49.672 traddr: 10.0.0.1 00:15:49.672 eflags: none 00:15:49.672 sectype: none 00:15:49.672 =====Discovery Log Entry 1====== 00:15:49.672 trtype: tcp 00:15:49.672 adrfam: ipv4 00:15:49.672 subtype: nvme subsystem 00:15:49.672 treq: not specified, sq flow control disable supported 00:15:49.672 portid: 1 00:15:49.672 trsvcid: 4420 00:15:49.672 subnqn: nqn.2024-02.io.spdk:cnode0 00:15:49.672 traddr: 10.0.0.1 00:15:49.672 eflags: none 00:15:49.672 sectype: none 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:49.672 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.931 nvme0n1 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:49.931 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.932 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.191 nvme0n1 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.191 nvme0n1 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.191 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.451 16:19:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.451 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.451 nvme0n1 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:15:50.451 16:19:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:50.451 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.452 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.710 nvme0n1 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:50.710 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.711 nvme0n1 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.711 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:50.970 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.229 nvme0n1 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.229 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:51.488 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.489 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.489 nvme0n1 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.489 16:19:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.489 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.748 nvme0n1 00:15:51.748 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.748 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.748 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.748 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.748 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.748 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:51.749 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.750 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:51.751 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.751 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.009 nvme0n1 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
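Each iteration traced here follows the same pattern for one (digest, dhgroup, keyid) tuple: nvmet_auth_set_key pushes the DHHC-1 secret (and, when one is configured, the controller secret) to the target side, bdev_nvme_set_options restricts the SPDK host to that digest and DH group, get_main_ns_ip resolves the initiator address (the nvmf/common.sh@741-755 lines above, which settle on 10.0.0.1), and connect_authenticate then attaches a controller with the matching --dhchap-key, checks it via bdev_nvme_get_controllers, and detaches it again. The following is a minimal stand-alone sketch of the pass in progress here (sha256, ffdhe3072, keyid 4), assuming ./scripts/rpc.py reaches the running SPDK application and that a key named key4 has already been registered with it; the key-registration step happens earlier in auth.sh and is not visible in this part of the log.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate pass (sha256 / ffdhe3072 / keyid 4),
# mirroring the rpc_cmd calls in the trace. Assumes ./scripts/rpc.py talks to the
# running SPDK application and that a DH-HMAC-CHAP key named key4 is already
# registered there (that setup is outside this excerpt).
set -e

rpc=./scripts/rpc.py
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# Limit the host to the digest/dhgroup pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Attach with host authentication only; iterations whose keyid has a controller
# key also pass --dhchap-ctrlr-key ckey<N> for bidirectional authentication.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key4

# Confirm the controller came up under the expected name, then tear it down
# before the next (digest, dhgroup, keyid) combination.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0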
00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.009 nvme0n1 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.009 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.267 16:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
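The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line traced at host/auth.sh@58 is what toggles bidirectional authentication: bash's ${var:+word} expansion yields the extra --dhchap-ctrlr-key argument only when a controller secret is defined for that keyid, which is why keyid 4 (whose ckey is empty in this run, see the [[ -z '' ]] check at host/auth.sh@51) is attached with host authentication only. A small stand-alone illustration of the idiom, with hypothetical variable names:

#!/usr/bin/env bash
# Illustration of the ${var:+word} expansion used at host/auth.sh@58 to optionally
# append --dhchap-ctrlr-key. Array contents and names here are hypothetical.
ckeys=("some-controller-secret" "")   # index 0 has a controller key, index 1 does not

for keyid in 0 1; do
    # Expands to two extra arguments when ckeys[keyid] is non-empty, otherwise to nothing.
    extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#extra[@]} extra arg(s): ${extra[*]}"
done
# Prints:
#   keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
#   keyid=1 -> 0 extra arg(s):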
00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.834 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.093 nvme0n1 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.093 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.363 nvme0n1 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.363 16:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 nvme0n1 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.622 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.881 nvme0n1 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:53.881 16:19:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.881 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.139 nvme0n1 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:54.139 16:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.089 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.347 nvme0n1 00:15:56.347 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.347 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.347 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.347 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.347 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.347 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.347 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.348 16:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.605 nvme0n1 00:15:56.605 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.605 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.606 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.606 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.606 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.606 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.606 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.606 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.606 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.606 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.864 
16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:56.864 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:56.865 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.865 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.865 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.123 nvme0n1 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:15:57.123 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.124 16:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.691 nvme0n1 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.691 16:19:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.691 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.950 nvme0n1 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.950 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.209 16:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.775 nvme0n1 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.775 16:19:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:15:58.775 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.776 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 nvme0n1 00:15:59.341 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.341 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.341 16:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.341 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.341 16:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.341 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.599 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.166 nvme0n1 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.166 
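The get_main_ns_ip fragments from nvmf/common.sh (lines 741-755) recur before every attach in this log. The following is a hedged reconstruction of what those trace lines appear to do, added here for readability only: the transport variable name TEST_TRANSPORT is an assumption (the trace only shows its expanded value, tcp), and the upstream helper may differ in detail.

# Sketch of the IP-resolution helper whose trace keeps appearing above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # Candidate environment variables per transport, as echoed in the trace.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # TEST_TRANSPORT is an assumed name; the trace only shows its value ("tcp").
    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp runs
    [[ -z $ip ]] && return 1
    [[ -z ${!ip} ]] && return 1            # indirect expansion: the actual address
    echo "${!ip}"                          # 10.0.0.1 in this run
}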
16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
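Each keyid iteration above repeats the same host-side RPC cycle. A minimal sketch of one such iteration (sha256 digest, ffdhe8192 DH group, key id 3) follows, assuming rpc_cmd in the log is the autotest wrapper that forwards to scripts/rpc.py and that key3/ckey3 name the DHHC-1 secrets registered earlier in the run; every flag below is taken from the trace itself.

rpc_py=scripts/rpc.py   # assumption: rpc_cmd in the log forwards here

# Restrict the host to the digest and DH group under test.
$rpc_py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach with both the host key and the controller (bidirectional) key.
$rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Authentication succeeded if the controller is visible, then clean up.
[[ $($rpc_py bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
$rpc_py bdev_nvme_detach_controller nvme0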
00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.166 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.731 nvme0n1 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.731 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:00.989 
16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.989 16:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.555 nvme0n1 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.555 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.556 nvme0n1 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.556 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
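The host/auth.sh@100-104 markers in this log outline the driver that produces all of the iterations above: a triple loop over digests, DH groups, and key ids that loads each secret into the kernel nvmet target before reconnecting. Below is a sketch of that skeleton, restricted to the values this excerpt actually exercises; the keys array and both helper functions are the ones defined earlier in host/auth.sh, and the real script likely covers more combinations.

digests=(sha256 sha384)                      # digests seen in this excerpt
dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)     # DH groups seen in this excerpt

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        # keys[0..4] hold the DHHC-1 secrets echoed throughout the log.
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target-side setup
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host-side connect/verify/detach
        done
    done
done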
00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.815 nvme0n1 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.815 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.075 nvme0n1 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.075 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.334 nvme0n1 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.334 nvme0n1 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.334 16:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:02.334 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.335 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
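The nvmet_auth_set_key sha384 ffdhe3072 0 call traced above only shows the values being echoed (the digest as 'hmac(sha384)', the DH group, and the DHHC-1 key and ckey); xtrace does not show where they are written. A minimal sketch of what such a helper plausibly does on the kernel nvmet target side follows. The configfs path and the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions based on the standard nvmet configfs layout and are not taken from this log.

# Sketch only: mirrors the echo pattern in the trace; the configfs locations are assumed.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}     # DHHC-1 secrets set up earlier in the test
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host_dir/dhchap_hash"      # e.g. 'hmac(sha384)' as echoed above
    echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "$key"            > "$host_dir/dhchap_key"       # host key for this keyid
    # a controller (bidirectional) key is only written when one exists for this keyid
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}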
00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.594 nvme0n1 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.594 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
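On the host side, each connect_authenticate iteration (such as the connect_authenticate sha384 ffdhe3072 1 that begins just above) reduces to four RPCs that appear verbatim in the trace: restrict the allowed digest and DH group, attach with the per-keyid DH-HMAC-CHAP secrets, confirm the controller actually came up, and detach again. A condensed sketch, assuming rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py and that keys/ckeys hold the DHHC-1 secrets generated earlier in the test:

rpc_cmd() { "${SPDK_DIR:?}/scripts/rpc.py" "$@"; }    # assumption: forwards to the SPDK RPC client

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # optional controller (bidirectional) key, same expansion as host/auth.sh@58 in the trace
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 10.0.0.1 is NVMF_INITIATOR_IP, the address get_main_ns_ip resolves for the tcp transport
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # authentication is treated as successful only if the controller is listed afterwards
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

Running this for every dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 and every keyid in "${!keys[@]}" reproduces the pattern this portion of the log cycles through.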
00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.595 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.854 nvme0n1 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.854 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.112 nvme0n1 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.112 nvme0n1 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.112 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.371 16:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.371 nvme0n1 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.371 16:19:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.371 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.630 nvme0n1 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.630 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.889 nvme0n1 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.889 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.148 16:19:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.148 nvme0n1 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.148 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:04.407 16:19:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.407 16:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 nvme0n1 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.665 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.666 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.666 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.666 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.666 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.666 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:04.666 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:04.666 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 nvme0n1 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.924 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.182 nvme0n1 00:16:05.182 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.182 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.182 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.182 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.182 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.182 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.440 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.440 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.440 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.440 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.441 16:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.699 nvme0n1 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.699 16:19:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.699 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.267 nvme0n1 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.267 16:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.525 nvme0n1 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.525 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.091 nvme0n1 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:07.091 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
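For readers following the trace, the set-options / attach / verify / detach cycle that repeats above can be approximated by the short stand-alone sketch below. It is an illustrative reconstruction, not part of the test script: the scripts/rpc.py path, the specific digest/dhgroup pair, and the assumption that keys named key0/ckey0 were registered earlier in the run are placeholders inferred from the surrounding log, not authoritative test parameters.

  #!/usr/bin/env bash
  # Minimal sketch of one connect_authenticate iteration (values are examples).
  RPC="scripts/rpc.py"      # assumed location of the SPDK RPC client
  DIGEST=sha384             # example digest seen in this part of the log
  DHGROUP=ffdhe8192         # example DH group seen in this part of the log
  # 1. Restrict the initiator to a single digest/dhgroup pair for this iteration.
  $RPC bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
  # 2. Attach with the key under test (the controller key is omitted when no ckey is configured).
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. Verify the controller appeared, then detach before the next iteration.
  $RPC bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
  $RPC bdev_nvme_detach_controller nvme0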
00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.092 16:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.658 nvme0n1 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.658 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.224 nvme0n1 00:16:08.224 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.224 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.224 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.224 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.224 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.224 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.482 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.482 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.482 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.483 16:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.483 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.483 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.049 nvme0n1 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.049 16:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.616 nvme0n1 00:16:09.616 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.616 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:09.616 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.616 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.616 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.616 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.873 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.874 16:19:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.874 16:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.463 nvme0n1 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.463 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.721 nvme0n1 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.721 16:19:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.721 nvme0n1 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.721 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.722 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.722 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.722 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.722 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.722 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.722 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.980 nvme0n1 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.980 16:19:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.980 16:19:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.980 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.238 nvme0n1 00:16:11.238 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.238 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.238 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.238 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.238 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.238 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.238 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.238 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.239 nvme0n1 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.239 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.497 16:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.497 nvme0n1 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.497 
16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:11.497 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.498 16:19:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.498 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.756 nvme0n1 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
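Note on the target side of each iteration: the host/auth.sh@42-51 trace repeated throughout this log comes from nvmet_auth_set_key, which programs the digest, DH group, key and (optionally) controller key for the host entry before the initiator reconnects. The helper below is only a hedged reconstruction of that step; the configfs location and the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions about a Linux kernel nvmet target and are not shown in this trace.

    # Hedged sketch, not the script's literal body. Assumes a kernel nvmet target
    # configured over configfs and the keys[]/ckeys[] arrays seen in the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host_dir/dhchap_hash"      # e.g. hmac(sha512)
        echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # e.g. ffdhe3072
        echo "$key"          > "$host_dir/dhchap_key"
        # Controller key is optional; keyid 4 has none in this run.
        [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    }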
00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.756 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.757 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.757 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.757 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.014 nvme0n1 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.014 16:19:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
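The nvmf/common.sh@741-755 lines around this point (continuing just below) are the get_main_ns_ip helper picking the address used for every attach in this run. A rough reconstruction from the trace follows; the variable names rdma/tcp/NVMF_INITIATOR_IP match the trace, while the transport variable name and the indirect expansion are assumptions.

    # Hedged sketch of get_main_ns_ip as traced: map transport -> env var name,
    # then dereference it. In this run it resolves to 10.0.0.1 for tcp.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }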
00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.014 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.271 nvme0n1 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.271 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.272 
16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.272 nvme0n1 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.272 16:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.530 nvme0n1 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.530 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.789 16:19:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.789 nvme0n1 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.789 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.047 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
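Every digest/dhgroup/keyid combination in this log runs the same host-side cycle (host/auth.sh@55-65): restrict the allowed DH-HMAC-CHAP digests and DH groups, attach with the matching key names, confirm the controller actually came up, then detach. The sketch below is pieced together from that trace; rpc_cmd is assumed to wrap SPDK's scripts/rpc.py against the running host application, and key1/ckey1 etc. are assumed to be keyring key names registered earlier in the test.

    # Hedged sketch of the connect_authenticate cycle seen above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Controller key argument only when a ckey exists for this keyid.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The controller is only listed if DH-HMAC-CHAP authentication succeeded.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }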
00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.048 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.306 nvme0n1 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.306 16:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.565 nvme0n1 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.565 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.824 nvme0n1 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
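The nvmet_auth_set_key steps traced here (host/auth.sh@42-51) echo the digest wrapped as 'hmac(...)', the DH group, the DHHC-1 host secret and, when one is defined, the controller secret for the target side of each iteration; the redirect targets are outside this excerpt. A minimal sketch of what such a helper could look like, assuming the Linux kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the destinations -- the paths below are assumptions, not taken from this log:
#!/usr/bin/env bash
# Sketch only: configfs paths and attribute names are assumed, not shown in the trace.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    echo "hmac($digest)" > "$host_dir/dhchap_hash"      # 'hmac(sha512)' in the trace (auth.sh@48)
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # ffdhe4096 / ffdhe6144 / ffdhe8192 (auth.sh@49)
    echo "$key"          > "$host_dir/dhchap_key"       # DHHC-1 host secret (auth.sh@50)
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # optional controller secret (auth.sh@51)
}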
00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.824 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.083 nvme0n1 00:16:14.083 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.083 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.083 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.083 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.083 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.083 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
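Each connect_authenticate pass in this trace (host/auth.sh@55-65) drives the same initiator-side RPC sequence: restrict the allowed DH-HMAC-CHAP digest and DH group, attach the controller with the matching key (plus the controller key when one exists), check that the controller came up, then detach. rpc_cmd here is the test suite's wrapper around scripts/rpc.py; a condensed sketch of one such iteration (sha512 / ffdhe6144, keyid 1), assuming the key1/ckey1 key names were registered earlier in the script:
# Values mirror the rpc_cmd calls visible in the trace; this is a sketch, not the test itself.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 when authentication succeeds
scripts/rpc.py bdev_nvme_detach_controller nvme0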
00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.341 16:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.599 nvme0n1 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.599 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.166 nvme0n1 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.166 16:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.424 nvme0n1 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.424 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.033 nvme0n1 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.033 16:19:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM3MTIzZTA1OTYxOWIyMDM1Y2RkNTI0N2Y4ZjgyODLsnOFI: 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: ]] 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBmYzc1ZTI5NGE5YmY4NTBkOTFjNzBjNTc0ZjdiOWY1M2RkYTU1YTQyYThhM2JkY2U1NzVkN2YyMzJlOTUwN31S8gU=: 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:16.033 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.034 16:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.599 nvme0n1 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.599 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:16.600 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:16.600 16:20:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:16.600 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.600 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.600 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.532 nvme0n1 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.532 16:20:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjg4OTY1NjQwMjVjODE5MDRkMmNiMTE4YzI0ZmJkMDUQLYje: 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: ]] 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTEzODM2MzUwYjI3ODZiMTE3YzI5ZjNmNzkwYmJjMWPdSQks: 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.532 16:20:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.532 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 nvme0n1 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWVmNmJkMjJhNjBlMmUzMTk4YmE2Yzg4YTVkOWFiOTg2YzA1NmVjYTc1YjVjNjAwofhlAA==: 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZiNjM5MjRmNjk1NTdhNzNhYzUxMjY3MGI0NzRkYmLao3HF: 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:18.098 16:20:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.098 16:20:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 nvme0n1 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA0ZDQ3NGNlZTBjZWYzMzA1ZTMwYmJmMDY4N2E3NTMwMzRkMDM1YWYzMWNjZjJlMTNlNmRiZTI5MmRhYjQzNbrfc0I=: 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.664 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.922 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.922 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:18.923 16:20:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.502 nvme0n1 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJiNWIwZWUzODhjZjVlYjcxYTY3YWFlZGJkNjE5YmRhNzVjZTAwNDViOGE5OWU4u+GojA==: 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ3OGY2YTQ4NTQ4OWZjZjYwM2VlZDAzOWFhOWU0ZmY4NDQ2YjAyZDYyNmI1MzY5ZGX/JA==: 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.502 
16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.502 request: 00:16:19.502 { 00:16:19.502 "name": "nvme0", 00:16:19.502 "trtype": "tcp", 00:16:19.502 "traddr": "10.0.0.1", 00:16:19.502 "adrfam": "ipv4", 00:16:19.502 "trsvcid": "4420", 00:16:19.502 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:19.502 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:19.502 "prchk_reftag": false, 00:16:19.502 "prchk_guard": false, 00:16:19.502 "hdgst": false, 00:16:19.502 "ddgst": false, 00:16:19.502 "method": "bdev_nvme_attach_controller", 00:16:19.502 "req_id": 1 00:16:19.502 } 00:16:19.502 Got JSON-RPC error response 00:16:19.502 response: 00:16:19.502 { 00:16:19.502 "code": -5, 00:16:19.502 "message": "Input/output error" 00:16:19.502 } 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- 
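The NOT wrapper traced above is the negative assertion of the test: with no DH-HMAC-CHAP key supplied, the attach must be rejected, so the -5 (Input/output error) JSON-RPC response is the expected outcome and es=1 is what lets the test continue. Stripped of the argument validation and exit-status bookkeeping visible in the trace, the pattern is roughly this simplified stand-in (not the literal helper from common/autotest_common.sh):

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1        # unexpected success -> assertion fails
        fi
        return 0            # expected failure
    }

    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0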
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:19.502 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.503 request: 00:16:19.503 { 00:16:19.503 "name": "nvme0", 00:16:19.503 "trtype": "tcp", 00:16:19.503 "traddr": "10.0.0.1", 00:16:19.503 "adrfam": "ipv4", 00:16:19.503 "trsvcid": "4420", 00:16:19.503 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:19.503 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:19.503 "prchk_reftag": false, 00:16:19.503 "prchk_guard": false, 00:16:19.503 "hdgst": false, 00:16:19.503 "ddgst": false, 00:16:19.503 "dhchap_key": "key2", 00:16:19.503 "method": "bdev_nvme_attach_controller", 00:16:19.503 "req_id": 1 00:16:19.503 } 00:16:19.503 Got JSON-RPC error response 00:16:19.503 response: 00:16:19.503 { 00:16:19.503 "code": -5, 00:16:19.503 "message": "Input/output error" 00:16:19.503 } 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:19.503 16:20:03 
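The request/response block above is rpc_cmd's echo of the failed attach with key2, not the literal bytes exchanged with the RPC server. On the wire the same call is a plain JSON-RPC 2.0 request; a hedged reconstruction follows (the envelope framing is assumed from SPDK's standard JSON-RPC server, the params mirror the dump above, and the fields that merely defaulted to false are omitted):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.1",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2024-02.io.spdk:cnode0",
        "hostnqn": "nqn.2024-02.io.spdk:host0",
        "dhchap_key": "key2"
      }
    }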
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:19.503 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.762 request: 00:16:19.762 { 00:16:19.762 "name": "nvme0", 00:16:19.762 "trtype": "tcp", 00:16:19.762 "traddr": "10.0.0.1", 00:16:19.762 "adrfam": "ipv4", 
00:16:19.762 "trsvcid": "4420", 00:16:19.762 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:19.762 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:19.762 "prchk_reftag": false, 00:16:19.762 "prchk_guard": false, 00:16:19.762 "hdgst": false, 00:16:19.762 "ddgst": false, 00:16:19.762 "dhchap_key": "key1", 00:16:19.762 "dhchap_ctrlr_key": "ckey2", 00:16:19.762 "method": "bdev_nvme_attach_controller", 00:16:19.762 "req_id": 1 00:16:19.762 } 00:16:19.762 Got JSON-RPC error response 00:16:19.762 response: 00:16:19.762 { 00:16:19.762 "code": -5, 00:16:19.762 "message": "Input/output error" 00:16:19.762 } 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.762 rmmod nvme_tcp 00:16:19.762 rmmod nvme_fabrics 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77989 ']' 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77989 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 77989 ']' 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 77989 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77989 00:16:19.762 killing process with pid 77989 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77989' 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 77989 00:16:19.762 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 77989 00:16:20.020 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.020 
16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:20.021 16:20:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:20.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:20.846 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.846 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.846 16:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Cvu /tmp/spdk.key-null.L15 /tmp/spdk.key-sha256.zWc /tmp/spdk.key-sha384.Myc /tmp/spdk.key-sha512.jVG /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:20.846 16:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:21.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:21.413 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:21.413 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:21.413 00:16:21.413 real 0m34.865s 00:16:21.413 user 0m31.240s 00:16:21.413 sys 0m3.561s 00:16:21.413 16:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.413 ************************************ 00:16:21.413 END TEST nvmf_auth_host 00:16:21.413 
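The remaining teardown removes the kernel nvmet target that served as the authenticating controller. The configfs operations traced above, gathered in order (only the redirect target of the `echo 0` is not visible in the xtrace):

    nvmet=/sys/kernel/config/nvmet
    rm    "$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    # echo 0 ...   (xtrace does not show the redirect target; presumably this
    #               disables the namespace entry before the rmdirs can succeed)
    rm -f "$nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0"
    modprobe -r nvmet_tcp nvmet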
************************************ 00:16:21.413 16:20:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.413 16:20:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:21.413 16:20:04 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:16:21.413 16:20:04 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:21.413 16:20:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:21.413 16:20:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.413 16:20:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.413 ************************************ 00:16:21.413 START TEST nvmf_digest 00:16:21.413 ************************************ 00:16:21.413 16:20:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:21.413 * Looking for test storage... 00:16:21.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:21.413 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:21.414 Cannot find device "nvmf_tgt_br" 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.414 Cannot find device "nvmf_tgt_br2" 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:16:21.414 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:21.672 Cannot find device "nvmf_tgt_br" 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:16:21.672 16:20:05 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:21.672 Cannot find device "nvmf_tgt_br2" 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:21.672 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:21.931 16:20:05 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:21.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:16:21.931 00:16:21.931 --- 10.0.0.2 ping statistics --- 00:16:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.931 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:21.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:21.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:21.931 00:16:21.931 --- 10.0.0.3 ping statistics --- 00:16:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.931 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:21.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:21.931 00:16:21.931 --- 10.0.0.1 ping statistics --- 00:16:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.931 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:21.931 ************************************ 00:16:21.931 START TEST nvmf_digest_clean 00:16:21.931 ************************************ 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:21.931 16:20:05 
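For orientation, this is the network nvmf_veth_init just assembled and then verified with the three pings: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the nvmf_tgt_ns_spdk namespace owns 10.0.0.2 and 10.0.0.3 on the target veths, and all the peer ends are joined by the nvmf_br bridge. The commands are the ones traced above, minus the best-effort pre-cleanup and the individual link-up steps:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # (every interface is additionally brought up, including lo inside the namespace)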
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79553 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79553 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79553 ']' 00:16:21.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.931 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:21.931 [2024-07-12 16:20:05.552139] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:21.931 [2024-07-12 16:20:05.552231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.201 [2024-07-12 16:20:05.693385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.201 [2024-07-12 16:20:05.763440] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.201 [2024-07-12 16:20:05.763494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.201 [2024-07-12 16:20:05.763508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.201 [2024-07-12 16:20:05.763518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.201 [2024-07-12 16:20:05.763527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
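nvmfappstart launches the target for the digest tests inside that namespace. Reduced to its essentials (the backgrounding and pid capture are implied by the nvmfpid=79553 and waitforlisten lines above rather than shown verbatim by the xtrace):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # blocks until the app's RPC socket answers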
00:16:22.201 [2024-07-12 16:20:05.763560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.201 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:22.201 [2024-07-12 16:20:05.878172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:22.201 null0 00:16:22.201 [2024-07-12 16:20:05.915552] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.463 [2024-07-12 16:20:05.939656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79582 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79582 /var/tmp/bperf.sock 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79582 ']' 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:22.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.463 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:22.463 [2024-07-12 16:20:06.007714] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:22.463 [2024-07-12 16:20:06.008074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79582 ] 00:16:22.463 [2024-07-12 16:20:06.148017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.720 [2024-07-12 16:20:06.217522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.720 16:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.720 16:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:22.720 16:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:22.720 16:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:22.720 16:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:22.977 [2024-07-12 16:20:06.567022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:22.977 16:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:22.977 16:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:23.542 nvme0n1 00:16:23.542 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:23.542 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:23.542 Running I/O for 2 seconds... 
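That is the whole per-run driver for a digest measurement: finish framework init on the bperf socket, attach the target namespace with the digest option under test (--ddgst enables the NVMe/TCP data digest on this connection), then trigger the timed run. The three commands involved, exactly as traced above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $bperf_py -s /var/tmp/bperf.sock perform_tests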
00:16:25.441 00:16:25.441 Latency(us) 00:16:25.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.441 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:25.441 nvme0n1 : 2.01 14521.31 56.72 0.00 0.00 8807.00 8221.79 20137.43 00:16:25.441 =================================================================================================================== 00:16:25.441 Total : 14521.31 56.72 0.00 0.00 8807.00 8221.79 20137.43 00:16:25.441 0 00:16:25.441 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:25.441 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:25.441 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:25.441 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:25.441 | select(.opcode=="crc32c") 00:16:25.441 | "\(.module_name) \(.executed)"' 00:16:25.441 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79582 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79582 ']' 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79582 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79582 00:16:26.009 killing process with pid 79582 00:16:26.009 Received shutdown signal, test time was about 2.000000 seconds 00:16:26.009 00:16:26.009 Latency(us) 00:16:26.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.009 =================================================================================================================== 00:16:26.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79582' 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79582 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79582 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
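The pass/fail decision is indirect: after each run the test asks the bdevperf instance for its accel framework statistics, and the jq filter above extracts which module executed the crc32c operations and how many times; with DSA disabled the expected module is software and the count must be non-zero. A hedged sketch of what the filter consumes and emits (the accel_get_stats layout is inferred from the filter, not shown in this log, and the numbers are illustrative only):

    # Only the fields the filter touches are shown.
    stats='{"operations":[{"opcode":"crc32c","module_name":"software","executed":12345},
                          {"opcode":"copy","module_name":"software","executed":678}]}'

    jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"' <<< "$stats"
    # -> software 12345   (consumed by: read -r acc_module acc_executed)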
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79629 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79629 /var/tmp/bperf.sock 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79629 ']' 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:26.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.009 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:26.009 [2024-07-12 16:20:09.675788] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:26.009 [2024-07-12 16:20:09.676167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79629 ] 00:16:26.009 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:26.009 Zero copy mechanism will not be used. 
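This second run repeats the same pattern with a larger I/O and a shallower queue. The bdevperf command line above, rewritten as an annotated array; the flag glosses are paraphrased from bdevperf's usage text rather than quoted from this log. The zero-copy notice simply records that 128 KiB I/Os exceed the 64 KiB zero-copy threshold, so the copy path is used for this workload:

    bperf_cmd=(
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf
        -m 2                    # core mask: run the bdevperf reactor on core 1
        -r /var/tmp/bperf.sock  # private RPC socket, separate from the target's
        -w randread             # workload
        -o 131072               # I/O size in bytes (128 KiB, hence the zero-copy notice)
        -t 2                    # run time in seconds
        -q 16                   # queue depth
        -z                      # idle until bdevperf.py ... perform_tests is called
        --wait-for-rpc          # defer full init until framework_start_init
    )
    "${bperf_cmd[@]}" &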
00:16:26.268 [2024-07-12 16:20:09.814497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.268 [2024-07-12 16:20:09.872657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.201 16:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.201 16:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:27.201 16:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:27.201 16:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:27.201 16:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:27.459 [2024-07-12 16:20:10.957644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:27.459 16:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.459 16:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.723 nvme0n1 00:16:27.723 16:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:27.723 16:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:27.723 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:27.723 Zero copy mechanism will not be used. 00:16:27.723 Running I/O for 2 seconds... 
00:16:30.249 00:16:30.249 Latency(us) 00:16:30.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.250 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:30.250 nvme0n1 : 2.00 7589.45 948.68 0.00 0.00 2104.44 1980.97 7923.90 00:16:30.250 =================================================================================================================== 00:16:30.250 Total : 7589.45 948.68 0.00 0.00 2104.44 1980.97 7923.90 00:16:30.250 0 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:30.250 | select(.opcode=="crc32c") 00:16:30.250 | "\(.module_name) \(.executed)"' 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79629 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79629 ']' 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79629 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79629 00:16:30.250 killing process with pid 79629 00:16:30.250 Received shutdown signal, test time was about 2.000000 seconds 00:16:30.250 00:16:30.250 Latency(us) 00:16:30.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.250 =================================================================================================================== 00:16:30.250 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79629' 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79629 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79629 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79691 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79691 /var/tmp/bperf.sock 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79691 ']' 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:30.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.250 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:30.508 [2024-07-12 16:20:14.008142] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:16:30.508 [2024-07-12 16:20:14.008471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79691 ] 00:16:30.508 [2024-07-12 16:20:14.139532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.508 [2024-07-12 16:20:14.194025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.444 16:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.444 16:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:31.444 16:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:31.444 16:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:31.444 16:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:31.703 [2024-07-12 16:20:15.177663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:31.703 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.703 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.961 nvme0n1 00:16:31.961 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:31.961 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:31.961 Running I/O for 2 seconds... 
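The xtrace entries above show the whole per-run setup digest.sh performs against the bperf RPC socket before the two-second pass starts. As an illustrative sketch only, not part of the captured log, the same sequence could be driven by hand roughly as follows, reusing the exact binaries, socket path, and NVMe-oF endpoint seen in the trace; the explicit wait loop stands in for the suite's waitforlisten helper:

  # Start bdevperf paused so the bdev layer can be configured over RPC first
  # (same command line as the bperfpid=79691 instance above).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # Wait for the RPC socket, finish framework init, then attach an NVMe-oF TCP
  # controller with data digest (--ddgst) enabled so the payloads are CRC-checked.
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Kick off the timed I/O pass; bdevperf prints the Latency(us) table that
  # follows in the log once the two seconds are up.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests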
00:16:34.497 00:16:34.497 Latency(us) 00:16:34.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.497 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.497 nvme0n1 : 2.01 16253.82 63.49 0.00 0.00 7868.05 4259.84 17039.36 00:16:34.497 =================================================================================================================== 00:16:34.497 Total : 16253.82 63.49 0.00 0.00 7868.05 4259.84 17039.36 00:16:34.497 0 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:34.497 | select(.opcode=="crc32c") 00:16:34.497 | "\(.module_name) \(.executed)"' 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79691 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79691 ']' 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79691 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79691 00:16:34.497 killing process with pid 79691 00:16:34.497 Received shutdown signal, test time was about 2.000000 seconds 00:16:34.497 00:16:34.497 Latency(us) 00:16:34.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.497 =================================================================================================================== 00:16:34.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79691' 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79691 00:16:34.497 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79691 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79751 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79751 /var/tmp/bperf.sock 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79751 ']' 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:34.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.497 16:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:34.497 [2024-07-12 16:20:18.124945] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:34.497 [2024-07-12 16:20:18.125182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:16:34.497 Zero copy mechanism will not be used. 
00:16:34.497 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79751 ] 00:16:34.756 [2024-07-12 16:20:18.255994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.756 [2024-07-12 16:20:18.308591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.690 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.690 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:35.690 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:35.690 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:35.690 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:35.690 [2024-07-12 16:20:19.283348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:35.690 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:35.690 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:35.948 nvme0n1 00:16:35.948 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:35.948 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:36.207 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:36.207 Zero copy mechanism will not be used. 00:16:36.207 Running I/O for 2 seconds... 
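Once the two-second run below completes, digest.sh decides pass or fail by querying the same bperf instance for accel statistics and checking which module actually executed the crc32c (digest) operations; with no accelerator configured in these runs the expected module is software. A minimal sketch of that check, with the suite's bperf_rpc/get_accel_stats helpers paraphrased as a plain pipeline over the same /var/tmp/bperf.sock socket:

  # Pull the crc32c entry out of accel_get_stats as "<module_name> <executed>".
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # The run only counts as clean if digests were actually computed, and by the
  # module the test expects (software in this configuration).
  (( acc_executed > 0 )) && [[ "$acc_module" == software ]]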
00:16:38.108 00:16:38.108 Latency(us) 00:16:38.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.108 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:38.108 nvme0n1 : 2.00 6685.35 835.67 0.00 0.00 2387.45 1720.32 3842.79 00:16:38.108 =================================================================================================================== 00:16:38.108 Total : 6685.35 835.67 0.00 0.00 2387.45 1720.32 3842.79 00:16:38.108 0 00:16:38.108 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:38.108 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:38.108 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:38.108 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:38.108 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:38.108 | select(.opcode=="crc32c") 00:16:38.108 | "\(.module_name) \(.executed)"' 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79751 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79751 ']' 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79751 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79751 00:16:38.367 killing process with pid 79751 00:16:38.367 Received shutdown signal, test time was about 2.000000 seconds 00:16:38.367 00:16:38.367 Latency(us) 00:16:38.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.367 =================================================================================================================== 00:16:38.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79751' 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79751 00:16:38.367 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79751 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79553 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 79553 ']' 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79553 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79553 00:16:38.627 killing process with pid 79553 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79553' 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79553 00:16:38.627 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79553 00:16:38.887 00:16:38.887 real 0m16.951s 00:16:38.887 user 0m33.630s 00:16:38.887 sys 0m4.356s 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:38.887 ************************************ 00:16:38.887 END TEST nvmf_digest_clean 00:16:38.887 ************************************ 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:38.887 ************************************ 00:16:38.887 START TEST nvmf_digest_error 00:16:38.887 ************************************ 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:38.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79829 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79829 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79829 ']' 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.887 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:38.887 [2024-07-12 16:20:22.543134] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:38.887 [2024-07-12 16:20:22.543214] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.146 [2024-07-12 16:20:22.682013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.146 [2024-07-12 16:20:22.742380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.146 [2024-07-12 16:20:22.742462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.146 [2024-07-12 16:20:22.742474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.146 [2024-07-12 16:20:22.742483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.146 [2024-07-12 16:20:22.742490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:39.146 [2024-07-12 16:20:22.742514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 [2024-07-12 16:20:22.838923] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.146 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:39.404 [2024-07-12 16:20:22.877692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:39.404 null0 00:16:39.404 [2024-07-12 16:20:22.911814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.404 [2024-07-12 16:20:22.935933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79853 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79853 /var/tmp/bperf.sock 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79853 ']' 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 
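For the nvmf_digest_error test starting here, the interesting configuration is on the target side: before the target's subsystems are initialized (it was launched with --wait-for-rpc), the suite's rpc_cmd helper, which talks to the target's /var/tmp/spdk.sock socket shown above, assigns the crc32c opcode to the error-injecting accel module; once the bperf controller is attached it flips the injection from disabled to corrupting the computed digests. Sketched below as plain rpc.py calls, with the RPC names and flags copied verbatim from the entries traced just above and in the run that follows:

  # Route all crc32c (digest) work through the accel "error" module.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error

  # Injection starts out disabled and is switched to corrupting digests after
  # the bperf controller attach.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

With corruption active, the digests the target produces no longer verify on the host side, which is what the long run of data digest error and COMMAND TRANSIENT TRANSPORT ERROR completions further down reflects.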
00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:39.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.404 16:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:39.404 [2024-07-12 16:20:22.986225] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:39.404 [2024-07-12 16:20:22.986462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79853 ] 00:16:39.404 [2024-07-12 16:20:23.120364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.661 [2024-07-12 16:20:23.179690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.661 [2024-07-12 16:20:23.210665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:39.661 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.661 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:39.661 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:39.661 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:39.919 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:39.919 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.919 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:39.919 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.919 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:39.919 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.178 nvme0n1 00:16:40.178 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:40.178 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.178 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:40.178 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.178 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
00:16:40.178 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:40.437 Running I/O for 2 seconds... 00:16:40.437 [2024-07-12 16:20:23.997546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:23.997598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:23.997619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.014218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.014259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.014280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.030900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.030976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.031016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.048106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.048149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.048170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.065494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.065586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.065609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.081985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.082027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.082050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.098459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.098498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.098517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.114945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.114986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.115007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.132656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.132704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.132728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.437 [2024-07-12 16:20:24.151593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.437 [2024-07-12 16:20:24.151637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.437 [2024-07-12 16:20:24.151658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.696 [2024-07-12 16:20:24.170916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.696 [2024-07-12 16:20:24.170999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.696 [2024-07-12 16:20:24.171038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.696 [2024-07-12 16:20:24.187911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.696 [2024-07-12 16:20:24.187980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.696 [2024-07-12 16:20:24.188018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.696 [2024-07-12 16:20:24.205798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.696 [2024-07-12 16:20:24.205836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.205857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.222279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.222320] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.222342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.239523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.239562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.239583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.255734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.255774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.255794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.273683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.273723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.273743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.290643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.290683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.290704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.307255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.307309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.307330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.324374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.324431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.324477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.341549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.341588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.341607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.358797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.358836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.358856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.375613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.375651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.375671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.392196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.392238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.392259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.697 [2024-07-12 16:20:24.408523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.697 [2024-07-12 16:20:24.408566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.697 [2024-07-12 16:20:24.408589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.425863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.425960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.425998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.443226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.443266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.443301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.460141] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.460183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.460205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.476733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.476782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.476832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.493285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.493325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.493361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.510482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.510526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.510549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.528719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.528764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.528817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.547137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.547180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.547202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.564994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.565064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.565086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:40.957 [2024-07-12 16:20:24.581397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.581502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.581524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.599043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.599082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.599102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.617551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.617591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.617611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.633658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.633697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.633717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.651063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.651101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.651154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.957 [2024-07-12 16:20:24.669057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:40.957 [2024-07-12 16:20:24.669099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.957 [2024-07-12 16:20:24.669120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.216 [2024-07-12 16:20:24.687422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.216 [2024-07-12 16:20:24.687477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.216 [2024-07-12 16:20:24.687498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.216 [2024-07-12 16:20:24.703778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.216 [2024-07-12 16:20:24.703817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.216 [2024-07-12 16:20:24.703836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.216 [2024-07-12 16:20:24.719829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.216 [2024-07-12 16:20:24.719896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.216 [2024-07-12 16:20:24.719918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.216 [2024-07-12 16:20:24.736108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.216 [2024-07-12 16:20:24.736146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.216 [2024-07-12 16:20:24.736166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.216 [2024-07-12 16:20:24.753257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.216 [2024-07-12 16:20:24.753300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.216 [2024-07-12 16:20:24.753337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.772003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.772045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.772068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.789232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.789272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.789293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.806422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.806461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.806481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.824380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.824426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.824488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.843886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.843949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.843973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.861112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.861152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.861173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.877525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.877564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.877583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.894083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.894127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.894150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.911621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.911659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.217 [2024-07-12 16:20:24.911679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.217 [2024-07-12 16:20:24.928426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.217 [2024-07-12 16:20:24.928522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
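Each report in this stretch pairs a data digest error on the TCP qpair with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion for the corresponding READ, which is the intended outcome while the corrupt injection configured above is active; the --bdev-retry-count -1 option passed to bperf earlier in the trace presumably keeps these reads retrying rather than failing the job outright. When digging through a saved copy of this console output, the injected failures can be tallied with a one-liner (the file name is hypothetical):

  grep -c 'data digest error' nvmf-tcp-uring-vg-autotest.console.log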
00:16:41.217 [2024-07-12 16:20:24.928544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.475 [2024-07-12 16:20:24.945340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.475 [2024-07-12 16:20:24.945378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.475 [2024-07-12 16:20:24.945398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:24.962775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:24.962815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:24.962834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:24.979366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:24.979405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:24.979424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:24.995982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:24.996020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:24.996039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.011571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.011609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.011629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.028928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.029006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.029030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.047457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.047503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 
nsid:1 lba:3318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.047526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.065165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.065204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.065224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.090641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.090683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.090703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.108513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.108559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.108582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.125880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.125966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.126003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.143441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.143484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.143505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.161547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.161588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.161608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.180897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.180951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.180974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.476 [2024-07-12 16:20:25.198790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.476 [2024-07-12 16:20:25.198845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.476 [2024-07-12 16:20:25.198900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.216142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.216182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.734 [2024-07-12 16:20:25.216202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.232204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.232245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.734 [2024-07-12 16:20:25.232266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.247403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.247442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.734 [2024-07-12 16:20:25.247462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.263504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.263542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.734 [2024-07-12 16:20:25.263562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.279983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.280070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.734 [2024-07-12 16:20:25.280093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.297028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.297067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.734 [2024-07-12 16:20:25.297087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.312173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.312213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.734 [2024-07-12 16:20:25.312233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.327008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.327045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.734 [2024-07-12 16:20:25.327065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.734 [2024-07-12 16:20:25.342230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.734 [2024-07-12 16:20:25.342270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.735 [2024-07-12 16:20:25.342304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.735 [2024-07-12 16:20:25.357635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.735 [2024-07-12 16:20:25.357674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.735 [2024-07-12 16:20:25.357694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.735 [2024-07-12 16:20:25.374582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.735 [2024-07-12 16:20:25.374624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.735 [2024-07-12 16:20:25.374645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.735 [2024-07-12 16:20:25.390900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.735 [2024-07-12 16:20:25.390994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.735 [2024-07-12 16:20:25.391031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.735 [2024-07-12 16:20:25.407158] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.735 [2024-07-12 16:20:25.407197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.735 [2024-07-12 16:20:25.407217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.735 [2024-07-12 16:20:25.422236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.735 [2024-07-12 16:20:25.422274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.735 [2024-07-12 16:20:25.422293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.735 [2024-07-12 16:20:25.439697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.735 [2024-07-12 16:20:25.439739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.735 [2024-07-12 16:20:25.439789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.735 [2024-07-12 16:20:25.457056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.735 [2024-07-12 16:20:25.457093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.735 [2024-07-12 16:20:25.457112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.477716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.477812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.477846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.497857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.497951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.497987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.519464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.519545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.519574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:41.994 [2024-07-12 16:20:25.539466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.539516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.539540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.558820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.558894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.558923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.577368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.577477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.577520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.601680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.601731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.601755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.619835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.619905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.619924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.638188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.638237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.638255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.656755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.656813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.656831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.675179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.675228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.675245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.693268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.693314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.693338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.994 [2024-07-12 16:20:25.711664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:41.994 [2024-07-12 16:20:25.711714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.994 [2024-07-12 16:20:25.711732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.252 [2024-07-12 16:20:25.730848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.252 [2024-07-12 16:20:25.730910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.252 [2024-07-12 16:20:25.730940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.252 [2024-07-12 16:20:25.749690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.252 [2024-07-12 16:20:25.749738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.252 [2024-07-12 16:20:25.749757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.252 [2024-07-12 16:20:25.768331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.252 [2024-07-12 16:20:25.768389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.252 [2024-07-12 16:20:25.768407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.252 [2024-07-12 16:20:25.787598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.252 [2024-07-12 16:20:25.787644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.252 [2024-07-12 16:20:25.787661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.252 [2024-07-12 16:20:25.805685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.252 [2024-07-12 16:20:25.805732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.252 [2024-07-12 16:20:25.805750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.252 [2024-07-12 16:20:25.820511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.252 [2024-07-12 16:20:25.820551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.252 [2024-07-12 16:20:25.820582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.252 [2024-07-12 16:20:25.835298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.252 [2024-07-12 16:20:25.835332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.835361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.850924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.851005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.851021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.868540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.868580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.868596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.885264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.885297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.885327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.900938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.901017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
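The COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions above are the expected effect of the crc32c corruption this test injects (the injection command, accel_error_inject_error -o crc32c -t corrupt -i 32, is visible later in this trace): each affected READ fails its NVMe/TCP data digest check and completes with a transient transport status, which bdev_nvme counts because --nvme-error-stat is enabled. The trace that follows reads that counter back over the bdevperf RPC socket; a consolidated, untested sketch of the same query, using only the socket path, bdev name and jq filter that appear in this log (the errcount variable name is illustrative only):

  # Fetch per-bdev I/O statistics from the bdevperf instance on /var/tmp/bperf.sock and
  # extract the transient-transport-error counter kept when --nvme-error-stat is enabled.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # digest.sh asserts the count is non-zero; in this run the check evaluates as (( 115 > 0 )).
  (( errcount > 0 ))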
00:16:42.253 [2024-07-12 16:20:25.901047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.916344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.916380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.916409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.931471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.931506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.931534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.946349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.946382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.946411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.962214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.962248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.962279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.253 [2024-07-12 16:20:25.978208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a9f50) 00:16:42.253 [2024-07-12 16:20:25.978261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.253 [2024-07-12 16:20:25.978305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.511 00:16:42.511 Latency(us) 00:16:42.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.511 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:42.511 nvme0n1 : 2.01 14605.28 57.05 0.00 0.00 8757.09 7149.38 34555.35 00:16:42.511 =================================================================================================================== 00:16:42.511 Total : 14605.28 57.05 0.00 0.00 8757.09 7149.38 34555.35 00:16:42.511 0 00:16:42.511 16:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:42.511 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:42.511 | .driver_specific 00:16:42.511 | .nvme_error 00:16:42.511 
| .status_code 00:16:42.511 | .command_transient_transport_error' 00:16:42.511 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:42.511 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 )) 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79853 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79853 ']' 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79853 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79853 00:16:42.770 killing process with pid 79853 00:16:42.770 Received shutdown signal, test time was about 2.000000 seconds 00:16:42.770 00:16:42.770 Latency(us) 00:16:42.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.770 =================================================================================================================== 00:16:42.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79853' 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79853 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79853 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79906 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79906 /var/tmp/bperf.sock 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79906 ']' 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.770 16:20:26 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:42.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.770 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:43.028 [2024-07-12 16:20:26.511814] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:43.028 [2024-07-12 16:20:26.512077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79906 ] 00:16:43.028 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:43.028 Zero copy mechanism will not be used. 00:16:43.028 [2024-07-12 16:20:26.646054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.028 [2024-07-12 16:20:26.701720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.028 [2024-07-12 16:20:26.731533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:43.287 16:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:43.855 nvme0n1 00:16:43.855 16:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:43.855 16:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.855 16:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:43.855 16:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.855 16:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:43.855 
16:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:43.855 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:43.855 Zero copy mechanism will not be used. 00:16:43.855 Running I/O for 2 seconds... 00:16:43.855 [2024-07-12 16:20:27.480645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.480713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.480730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.484738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.484793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.484808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.489057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.489112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.489128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.494059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.494097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.494127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.498493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.498530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.498545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.502879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.502943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.502956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.507145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.507181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.507193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.511339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.511376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.511405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.515500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.515536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.515549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.519812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.519848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.519905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.524279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.524317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.524347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.528554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.528593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.528607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.532651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.532689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.532719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.537016] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.537052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.537082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.541162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.541197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.541226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.545197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.545231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.545259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.549274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.549310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.549339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.553508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.553544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.553573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.557565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.557601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.557629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:43.855 [2024-07-12 16:20:27.561722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.561759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.561788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:43.855 [2024-07-12 16:20:27.565900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.855 [2024-07-12 16:20:27.565949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.855 [2024-07-12 16:20:27.565979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.856 [2024-07-12 16:20:27.570772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.856 [2024-07-12 16:20:27.570815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.856 [2024-07-12 16:20:27.570846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:43.856 [2024-07-12 16:20:27.575234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.856 [2024-07-12 16:20:27.575269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.856 [2024-07-12 16:20:27.575281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:43.856 [2024-07-12 16:20:27.579897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:43.856 [2024-07-12 16:20:27.579979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.856 [2024-07-12 16:20:27.579995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.115 [2024-07-12 16:20:27.584380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.115 [2024-07-12 16:20:27.584417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.115 [2024-07-12 16:20:27.584431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.588907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.588969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.588999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.593182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.593216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.593244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.597205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.597240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.597269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.601162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.601197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.601227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.605081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.605115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.605143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.609022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.609055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.609085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.613014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.613048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.613077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.617056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.617090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.617119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.621037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.621073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.621103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.625033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.625067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.625095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.629589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.629644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.629675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.634055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.634091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.634104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.638197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.638232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.638261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.642321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.642356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.642385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.646778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.646829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.646858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.650867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.650913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
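For reference, the second error-injection pass whose output continues here was configured by the xtrace above (host/digest.sh, run_bperf_err randread 131072 16). Pulled together into one untested sketch, using only the commands, paths and addresses that appear in this trace; rpc_cmd is the test framework's wrapper for the target-side RPC socket, which is not shown in these lines:

  # Start bdevperf in the background: 2-second random reads, 131072-byte (128 KiB) I/Os,
  # queue depth 16, with its JSON-RPC socket at /var/tmp/bperf.sock.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # Enable per-controller NVMe error counters; --bdev-retry-count -1 lets failed I/Os keep being retried at the bdev layer.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the NVMe-oF/TCP controller with the data digest enabled (--ddgst).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # On the target, turn on crc32c error injection (corrupt mode, -i 32), as traced above.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the configured workload; each corrupted digest shows up as the data digest errors logged around this point.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests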
00:16:44.116 [2024-07-12 16:20:27.650942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.654947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.654981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.655010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.659709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.659747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.659777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.663985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.664021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.664033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.667937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.667970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.667999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.672165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.672214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.672228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.677076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.677111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.677139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.681265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.681300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.681328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.685543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.685579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.685608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.689746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.689799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.689827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.694018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.694053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.694082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.698165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.698200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.698228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.702227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.702262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.116 [2024-07-12 16:20:27.702306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.116 [2024-07-12 16:20:27.706249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.116 [2024-07-12 16:20:27.706299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.706327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.710304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.710339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.710367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.714190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.714225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.714254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.718212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.718246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.718289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.722248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.722283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.722326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.726297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.726331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.726360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.730322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.730356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.730384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.734975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.735073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.735103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.739335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 
00:16:44.117 [2024-07-12 16:20:27.739369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.739397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.743676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.743714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.743727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.747995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.748029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.748058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.752034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.752068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.752097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.755998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.756032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.756060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.760041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.760074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.760102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.764174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.764210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.764238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.768226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.768261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.768289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.772215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.772249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.772277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.776170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.776217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.776230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.780227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.780262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.780291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.784310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.784344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.784373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.788214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.788249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.788277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.792246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.792281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.792310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.796123] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.796157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.796186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.800035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.800070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.800113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.804245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.804279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.804308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.808696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.808735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.808749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.813559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.813600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.813615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.818580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.818620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.818637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.117 [2024-07-12 16:20:27.823344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.823381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.117 [2024-07-12 16:20:27.823410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:16:44.117 [2024-07-12 16:20:27.827915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.117 [2024-07-12 16:20:27.827976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.118 [2024-07-12 16:20:27.828006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.118 [2024-07-12 16:20:27.832492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.118 [2024-07-12 16:20:27.832533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.118 [2024-07-12 16:20:27.832552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.118 [2024-07-12 16:20:27.837186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.118 [2024-07-12 16:20:27.837239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.118 [2024-07-12 16:20:27.837269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.842228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.842270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.842319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.847076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.847114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.847162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.851810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.851848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.851904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.855919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.855954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.855983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.859948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.859982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.860010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.864045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.864080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.864109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.868165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.868201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.868230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.873175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.873239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.873266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.878157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.878221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.878249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.379 [2024-07-12 16:20:27.883086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.379 [2024-07-12 16:20:27.883132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.379 [2024-07-12 16:20:27.883145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.888138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.888185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.888198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.892640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.892671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.892684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.897101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.897133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.897147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.901424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.901459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.901472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.905650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.905697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.905709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.909857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.909915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.909928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.914513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.914549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.914563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.919140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.919189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:44.380 [2024-07-12 16:20:27.919203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.923824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.923903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.923919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.928505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.928541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.928554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.933273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.933322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.933335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.938077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.938125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.938137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.942664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.942699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.942712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.947787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.947865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.947903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.952053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.952099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.952111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.956312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.956360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.956372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.960638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.960673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.960686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.964997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.965043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.965055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.969119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.969165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.969177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.973238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.973285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.973296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.977960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.978035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.978049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.982221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.982268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.982279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.986370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.986417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.986429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.990437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.990484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.990512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.994972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.995020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.995034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:27.999318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:27.999367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:27.999379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:28.003500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:28.003548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:28.003560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:28.007648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.380 [2024-07-12 16:20:28.007695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.380 [2024-07-12 16:20:28.007708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.380 [2024-07-12 16:20:28.011890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 
00:16:44.380 [2024-07-12 16:20:28.011948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.011961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.015911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.015957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.015968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.019979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.020011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.020022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.023995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.024040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.024053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.028045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.028091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.028103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.032074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.032120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.032132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.036009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.036054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.036066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.040048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.040094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.040106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.045088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.045134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.045162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.049417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.049482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.049494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.053709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.053759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.053788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.057903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.057959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.057971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.061939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.061986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.061999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.066130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.066177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.066189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.070319] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.070365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.070377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.074548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.074598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.074610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.078634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.078681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.078693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.082795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.082843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.082855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.086852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.086925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.086938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.091006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.091054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.091066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.095083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.095131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.095144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.099174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.099221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.099233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.381 [2024-07-12 16:20:28.104025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.381 [2024-07-12 16:20:28.104066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.381 [2024-07-12 16:20:28.104080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.640 [2024-07-12 16:20:28.108600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.640 [2024-07-12 16:20:28.108652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.108666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.113109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.113157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.113169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.117168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.117215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.117227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.121163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.121210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.121222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.125243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.125291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.125302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.129342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.129389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.129417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.133500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.133547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.133560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.137612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.137660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.137671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.141656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.141704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.141716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.145815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.145862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.145874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.149765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.149814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.149826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.153951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.153998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.154010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.158000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.158046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.158058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.162112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.162158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.162171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.166304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.166351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.166363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.170407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.170454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.170465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.174585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.174632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.174644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.178793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.178840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.178851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.182830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.182904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:44.641 [2024-07-12 16:20:28.182918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.186936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.186982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.186995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.190994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.191041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.191052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.195138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.195187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.195199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.199153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.199201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.199213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.203227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.203275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.203301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.208117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.208197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.208225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.212663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.212698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.212711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.217423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.217489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.217501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.222146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.222194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.222207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.227053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.227103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.641 [2024-07-12 16:20:28.227116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.641 [2024-07-12 16:20:28.231819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.641 [2024-07-12 16:20:28.231868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.231909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.236535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.236569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.236584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.241138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.241187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.241199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.245604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.245652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.245663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.250056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.250105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.250117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.254487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.254535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.254547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.258708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.258756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.258768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.262797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.262845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.262857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.267447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.267496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.267509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.271834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.271891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.271904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.275890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 
00:16:44.642 [2024-07-12 16:20:28.275945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.275958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.280068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.280114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.280125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.284092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.284139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.284151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.288149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.288180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.288191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.292143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.292189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.292201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.296187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.296232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.296244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.301300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.301349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.301360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.305598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.305629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.305641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.309895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.309953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.309965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.314088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.314134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.314146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.318616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.318682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.318695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.323205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.323252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.323264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.327461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.327509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.327521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.331638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.331686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.331698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.335871] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.335927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.335940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.339908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.339954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.339966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.344631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.344666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.344680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.349161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.349208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.349220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.353483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.353532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.353544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.642 [2024-07-12 16:20:28.357689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.642 [2024-07-12 16:20:28.357738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.642 [2024-07-12 16:20:28.357750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.643 [2024-07-12 16:20:28.362474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.643 [2024-07-12 16:20:28.362524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.643 [2024-07-12 16:20:28.362557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:16:44.902 [2024-07-12 16:20:28.367286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.902 [2024-07-12 16:20:28.367336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.902 [2024-07-12 16:20:28.367349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.902 [2024-07-12 16:20:28.371448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.902 [2024-07-12 16:20:28.371513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.902 [2024-07-12 16:20:28.371525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.902 [2024-07-12 16:20:28.375781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.902 [2024-07-12 16:20:28.375830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.902 [2024-07-12 16:20:28.375842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.902 [2024-07-12 16:20:28.379854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.902 [2024-07-12 16:20:28.379911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.902 [2024-07-12 16:20:28.379923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.902 [2024-07-12 16:20:28.383932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.902 [2024-07-12 16:20:28.383979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.902 [2024-07-12 16:20:28.383991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.902 [2024-07-12 16:20:28.387988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.902 [2024-07-12 16:20:28.388034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.902 [2024-07-12 16:20:28.388046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.902 [2024-07-12 16:20:28.392070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.902 [2024-07-12 16:20:28.392101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.902 [2024-07-12 16:20:28.392112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.396048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.396094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.396106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.399973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.400016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.400028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.403992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.404040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.404052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.408512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.408547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.408560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.413412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.413461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.413473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.417687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.417735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.417747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.422056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.422104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.422116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.426211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.426259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.426270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.430391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.430438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.430465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.434660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.434709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.434721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.438968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.439015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.439026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.443302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.443350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.443362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.447469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.447516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.447528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.451579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.451644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:44.903 [2024-07-12 16:20:28.451656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.455792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.455841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.455853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.459883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.459929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.459941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.463894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.463940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.463952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.467937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.467984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.467995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.471845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.471901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.471913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.475927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.475973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.475985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.479954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.480001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.480012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.483914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.483960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.483972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.487939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.487967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.487979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.491980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.492026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.492038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.495999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.496030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.496042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.500014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.500060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.500071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.504035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.504078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.504090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.508084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.508131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.508143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.903 [2024-07-12 16:20:28.512103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.903 [2024-07-12 16:20:28.512149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.903 [2024-07-12 16:20:28.512161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.516002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.516047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.516059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.520156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.520188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.520200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.524173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.524220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.524231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.528279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.528326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.528337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.532315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.532362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.532374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.536351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 
00:16:44.904 [2024-07-12 16:20:28.536398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.536410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.540380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.540427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.540439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.544512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.544546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.544558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.549029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.549076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.549089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.554046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.554109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.554122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.558437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.558501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.558513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.562729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.562778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.562791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.566986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.567033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.567044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.571202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.571250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.571262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.575363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.575410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.575422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.579467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.579516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.579527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.583639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.583688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.583700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.588095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.588143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.588156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.592550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.592586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.592600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.597105] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.597155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.597169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.601708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.601758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.601771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.606327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.606377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.606391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.611453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.611502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.611515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.616006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.616057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.616070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.620524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.620568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.620581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:44.904 [2024-07-12 16:20:28.625163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:44.904 [2024-07-12 16:20:28.625212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.904 [2024-07-12 16:20:28.625240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.629894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.629970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.629984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.634479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.634529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.634557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.638793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.638842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.638855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.642959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.643006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.643018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.647340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.647389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.647402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.651409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.651457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.651485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.655861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.655935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.655948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.660194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.660245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.660260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.664679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.664714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.664727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.669432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.669483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.669496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.674043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.674091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.674103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.678519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.678567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.678580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.682789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.682837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.682849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.686927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.686973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.686986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.690974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.691020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.691032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.695333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.695380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.695392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.699574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.699622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.699635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.703952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.703981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.703993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.708115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.708164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.708177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.712265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.712342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.712355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.716476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.716512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:45.164 [2024-07-12 16:20:28.716526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.720615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.720650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.720663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.725021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.725071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.725083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.729311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.729360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.729372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.733559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.733608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.164 [2024-07-12 16:20:28.733620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.164 [2024-07-12 16:20:28.738001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.164 [2024-07-12 16:20:28.738036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.738049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.742133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.742181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.742194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.746256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.746319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.746331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.750482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.750531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.750543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.754756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.754805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.754817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.758850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.758908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.758921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.763047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.763109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.763121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.767307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.767355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.767367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.771632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.771680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.771692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.775918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.775966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.775978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.779947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.779994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.780005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.783973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.784020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.784031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.787848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.787904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.787916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.791805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.791852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.791864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.795817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.795864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.795876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.799833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.799889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.799903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.803847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 
00:16:45.165 [2024-07-12 16:20:28.803902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.803915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.807811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.807858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.807869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.811779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.811826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.811837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.815781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.815828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.815839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.819809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.819857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.819868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.824095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.824142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.824154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.828439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.828496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.828509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.833044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.833077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.833088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.837719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.837756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.837770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.842626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.842663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.842677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.847593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.847629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.847642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.852102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.852151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.852163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.856513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.165 [2024-07-12 16:20:28.856549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.165 [2024-07-12 16:20:28.856563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.165 [2024-07-12 16:20:28.860670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.166 [2024-07-12 16:20:28.860705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.166 [2024-07-12 16:20:28.860718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.166 [2024-07-12 16:20:28.864846] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.166 [2024-07-12 16:20:28.864902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.166 [2024-07-12 16:20:28.864914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.166 [2024-07-12 16:20:28.868896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.166 [2024-07-12 16:20:28.868952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.166 [2024-07-12 16:20:28.868964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.166 [2024-07-12 16:20:28.873094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.166 [2024-07-12 16:20:28.873142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.166 [2024-07-12 16:20:28.873153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.166 [2024-07-12 16:20:28.877190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.166 [2024-07-12 16:20:28.877238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.166 [2024-07-12 16:20:28.877250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.166 [2024-07-12 16:20:28.881274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.166 [2024-07-12 16:20:28.881321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.166 [2024-07-12 16:20:28.881334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.166 [2024-07-12 16:20:28.885511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.166 [2024-07-12 16:20:28.885560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.166 [2024-07-12 16:20:28.885572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.890115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.890166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.890195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.894589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.894637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.894653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.899222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.899273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.899286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.903527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.903576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.903588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.907801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.907849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.907877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.912602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.912637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.912650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.917539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.917618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.917648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.921785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.921832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.921844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.925829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.925901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.925915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.929961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.930008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.930020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.934059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.934106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.934119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.938159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.938207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.938219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.942122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.942168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.942180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.946553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.946587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.946599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.951222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.951255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.951267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.955769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.955837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.955880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.960533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.960568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.960582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.965307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.965370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.965383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.970027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.970075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.970105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.974704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.974755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.974767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.979030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.979076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.979087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.983177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.983223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:45.426 [2024-07-12 16:20:28.983235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.426 [2024-07-12 16:20:28.987220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.426 [2024-07-12 16:20:28.987266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.426 [2024-07-12 16:20:28.987279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:28.991246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:28.991292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:28.991304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:28.995540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:28.995590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:28.995603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.000516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.000551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.000565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.005288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.005335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.005347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.009486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.009534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.009546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.013774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.013823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.013851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.018634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.018701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.018715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.023083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.023129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.023141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.027421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.027487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.027499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.031722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.031769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.031797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.036141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.036188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.036200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.040819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.040887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.040912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.045103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.045150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.045162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.049174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.049221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.049233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.053289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.053336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.053347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.057282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.057329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.057341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.061599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.061647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.061659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.065779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.065827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.065839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.069776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.069822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.069834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.073716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 
[2024-07-12 16:20:29.073763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.073774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.077838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.077912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.077925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.081723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.081770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.081782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.085954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.086002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.086014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.089916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.089963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.427 [2024-07-12 16:20:29.089975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.427 [2024-07-12 16:20:29.093961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.427 [2024-07-12 16:20:29.094008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.094020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.097953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.098000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.098012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.101971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.102017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.102029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.105905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.105951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.105963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.110112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.110176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.110190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.115227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.115275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.115288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.119697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.119745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.119758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.124056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.124102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.124114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.128228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.128273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.128285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.132325] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.132371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.132383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.136724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.136775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.136819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.140867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.140921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.140934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.144778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.144840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.144866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.428 [2024-07-12 16:20:29.148953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.428 [2024-07-12 16:20:29.149009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.428 [2024-07-12 16:20:29.149022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.153371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.153419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.153431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.157627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.157675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.157704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:45.688 [2024-07-12 16:20:29.162038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.162085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.162098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.166209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.166257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.166283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.170654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.170702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.170714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.175283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.175330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.175342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.179516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.179564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.179576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.183708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.183755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.183766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.187806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.187854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.187865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.191937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.191983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.191994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.196043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.196088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.196100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.200274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.200322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.200366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.204747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.204783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.204811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.208902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.208957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.208969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.212994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.213040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.213052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.217211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.217258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.217270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.221301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.221347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.221359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.225399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.225446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.225458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.229589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.229636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.229647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.233806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.233852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.233865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.238141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.238190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.238203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.242646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.242696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.242709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.247267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.247345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:45.688 [2024-07-12 16:20:29.247358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.251951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.251996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.252011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.256681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.256717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.688 [2024-07-12 16:20:29.256730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.688 [2024-07-12 16:20:29.261112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.688 [2024-07-12 16:20:29.261163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.261176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.265474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.265521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.265533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.269968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.270018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.270031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.274352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.274399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.274411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.278442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.278489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.278501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.282791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.282838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.282850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.286859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.286914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.286926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.290928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.290974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.290986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.295024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.295071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.295083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.299033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.299079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.299091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.303686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.303721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.303735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.308228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.308276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.308287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.312379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.312426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.312438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.316550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.316601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.316615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.320910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.320966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.320979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.325038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.325084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.325096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.329210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.329256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.329283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.333434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.333481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.333493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.337576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 
00:16:45.689 [2024-07-12 16:20:29.337623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.337635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.341675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.341722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.341734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.345710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.345757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.345769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.349901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.349957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.349969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.353951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.353996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.354008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.358014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.358061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.358074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.362103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.362149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.362161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.366095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.366140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.366152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.369984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.370029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.370041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.374031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.374078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.374089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.378112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.378159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.378171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.689 [2024-07-12 16:20:29.382195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.689 [2024-07-12 16:20:29.382242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.689 [2024-07-12 16:20:29.382254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.690 [2024-07-12 16:20:29.386302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.690 [2024-07-12 16:20:29.386350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.690 [2024-07-12 16:20:29.386362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.690 [2024-07-12 16:20:29.390522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.690 [2024-07-12 16:20:29.390584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.690 [2024-07-12 16:20:29.390596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.690 [2024-07-12 16:20:29.394719] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.690 [2024-07-12 16:20:29.394766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.690 [2024-07-12 16:20:29.394777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.690 [2024-07-12 16:20:29.398722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.690 [2024-07-12 16:20:29.398768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.690 [2024-07-12 16:20:29.398780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.690 [2024-07-12 16:20:29.402714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.690 [2024-07-12 16:20:29.402761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.690 [2024-07-12 16:20:29.402773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.690 [2024-07-12 16:20:29.406822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.690 [2024-07-12 16:20:29.406869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.690 [2024-07-12 16:20:29.406893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.690 [2024-07-12 16:20:29.410983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.690 [2024-07-12 16:20:29.411031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.690 [2024-07-12 16:20:29.411043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.415454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.415502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.415514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.419720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.419771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.419783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.424358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.424395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.424408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.429067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.429117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.429131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.434012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.434049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.434062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.438720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.438791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.438805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.443257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.443307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.443319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.447536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.447585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.447597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.451535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.451582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.451594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.455808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.455856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.455882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.460183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.460230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.460242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.464313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.464360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.464372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.468514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.468549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.468563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.472544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.472579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.472592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:45.954 [2024-07-12 16:20:29.477168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd5e810) 00:16:45.954 [2024-07-12 16:20:29.477208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.954 [2024-07-12 16:20:29.477223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:45.954 00:16:45.954 Latency(us) 00:16:45.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.954 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:45.955 nvme0n1 : 2.00 7228.90 903.61 0.00 0.00 2209.80 1690.53 5123.72 00:16:45.955 
=================================================================================================================== 00:16:45.955 Total : 7228.90 903.61 0.00 0.00 2209.80 1690.53 5123.72 00:16:45.955 0 00:16:45.955 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:45.955 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:45.955 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:45.955 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:45.955 | .driver_specific 00:16:45.955 | .nvme_error 00:16:45.955 | .status_code 00:16:45.955 | .command_transient_transport_error' 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 467 > 0 )) 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79906 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79906 ']' 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79906 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79906 00:16:46.219 killing process with pid 79906 00:16:46.219 Received shutdown signal, test time was about 2.000000 seconds 00:16:46.219 00:16:46.219 Latency(us) 00:16:46.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.219 =================================================================================================================== 00:16:46.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79906' 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79906 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79906 00:16:46.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
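The trace above closes out the randread digest-error leg: host/digest.sh pulls per-bdev NVMe error statistics over the bperf.sock RPC socket, extracts the transient transport error counter with jq, and the check at host/digest.sh@71 passes because 467 such completions were recorded, after which the first bdevperf (pid 79906) is torn down and the next instance is launched. A minimal bash sketch of that check, reusing the rpc.py path, socket, bdev name, and jq filter echoed in the trace (the helper name get_transient_errcount is the script's own; packaging it standalone like this is illustrative only):

    # Count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded for a bdev.
    # The counters exist because bdev_nvme_set_options was called with
    # --nvme-error-stat during setup (see the trace further below).
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # 467 in the run above; zero would fail this leg of the test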
00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79955 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79955 /var/tmp/bperf.sock 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79955 ']' 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.219 16:20:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:46.477 [2024-07-12 16:20:29.972983] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:16:46.477 [2024-07-12 16:20:29.973082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79955 ] 00:16:46.477 [2024-07-12 16:20:30.112244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.477 [2024-07-12 16:20:30.172390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.478 [2024-07-12 16:20:30.201279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:46.735 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.735 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:46.735 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:46.735 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:46.993 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:46.993 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.993 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:46.993 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.993 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.993 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:47.251 nvme0n1 00:16:47.251 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:47.251 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.251 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:47.251 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.251 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:47.251 16:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:47.251 Running I/O for 2 seconds... 
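The setup traced above prepares the randwrite leg of the same digest-error test: a second bdevperf (pid 79955) is started with -z and the script waits for it to listen on /var/tmp/bperf.sock, bdev_nvme is configured to keep NVMe error statistics and retry indefinitely, crc32c error injection is cleared, the controller is attached over TCP with data digest enabled (--ddgst), and only then is crc32c corruption injected (-i 256, presumably the injection interval) before perform_tests drives I/O for 2 seconds. A condensed sketch of that sequence, using the binaries and sockets echoed in the trace; accel_error_inject_error goes through rpc_cmd in the script, whose socket is not shown in this log, so it is left to rpc.py's default here as an assumption:

    # Initiator-side bdevperf for the randwrite workload. -z defers the workload;
    # the script later triggers it with perform_tests (and waits for the RPC
    # socket via waitforlisten before issuing any RPCs).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    RPC_CMD="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # rpc_cmd in the trace; socket assumed

    # Keep per-controller NVMe error counters and retry forever, so injected digest
    # failures are retried and counted rather than failing the run (Fail/s stays
    # 0.00 in the summary above).
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach with injection disabled and data digest (DDGST) enabled on the TCP connection.
    $RPC_CMD accel_error_inject_error -o crc32c -t disable
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the computed crc32c at the configured interval, then run the workload.
    $RPC_CMD accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

The WRITE-side digest errors that follow (tcp.c:2067:data_crc32_calc_done) are the expected effect of this injection; each one completes as a transient transport error that the next get_transient_errcount check will count.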
00:16:47.510 [2024-07-12 16:20:31.000337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fef90 00:16:47.510 [2024-07-12 16:20:31.003076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.510 [2024-07-12 16:20:31.003118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.510 [2024-07-12 16:20:31.017404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190feb58 00:16:47.510 [2024-07-12 16:20:31.020153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.510 [2024-07-12 16:20:31.020204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:47.510 [2024-07-12 16:20:31.035011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fe2e8 00:16:47.510 [2024-07-12 16:20:31.037710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.510 [2024-07-12 16:20:31.037747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:47.510 [2024-07-12 16:20:31.051165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fda78 00:16:47.510 [2024-07-12 16:20:31.053714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.510 [2024-07-12 16:20:31.053762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:47.510 [2024-07-12 16:20:31.066404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fd208 00:16:47.510 [2024-07-12 16:20:31.068821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.510 [2024-07-12 16:20:31.068882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:47.510 [2024-07-12 16:20:31.082113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fc998 00:16:47.510 [2024-07-12 16:20:31.084373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.510 [2024-07-12 16:20:31.084419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.097037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fc128 00:16:47.511 [2024-07-12 16:20:31.099339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.099385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:16:47.511 [2024-07-12 16:20:31.112151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fb8b8 00:16:47.511 [2024-07-12 16:20:31.114484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.114530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.126804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fb048 00:16:47.511 [2024-07-12 16:20:31.129037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.129081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.140986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fa7d8 00:16:47.511 [2024-07-12 16:20:31.143151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.143180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.155206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f9f68 00:16:47.511 [2024-07-12 16:20:31.157383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.157427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.170986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f96f8 00:16:47.511 [2024-07-12 16:20:31.173168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.173212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.186432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f8e88 00:16:47.511 [2024-07-12 16:20:31.189105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.189148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.202367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f8618 00:16:47.511 [2024-07-12 16:20:31.204515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.204546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.217232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f7da8 00:16:47.511 [2024-07-12 16:20:31.219792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.219839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.511 [2024-07-12 16:20:31.232154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f7538 00:16:47.511 [2024-07-12 16:20:31.234407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.511 [2024-07-12 16:20:31.234453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.247388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f6cc8 00:16:47.770 [2024-07-12 16:20:31.249652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.249696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.262713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f6458 00:16:47.770 [2024-07-12 16:20:31.265295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.265355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.278144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f5be8 00:16:47.770 [2024-07-12 16:20:31.280410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.280440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.294486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f5378 00:16:47.770 [2024-07-12 16:20:31.296856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.296894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.310698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f4b08 00:16:47.770 [2024-07-12 16:20:31.312913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.312951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.325775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f4298 00:16:47.770 [2024-07-12 16:20:31.327741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.327785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.340242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f3a28 00:16:47.770 [2024-07-12 16:20:31.342219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.342264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.354761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f31b8 00:16:47.770 [2024-07-12 16:20:31.356989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.357032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.369326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f2948 00:16:47.770 [2024-07-12 16:20:31.371223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.371266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.383423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f20d8 00:16:47.770 [2024-07-12 16:20:31.385467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.385509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.398008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f1868 00:16:47.770 [2024-07-12 16:20:31.400077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.400107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.412883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f0ff8 00:16:47.770 [2024-07-12 16:20:31.414760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.414806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.428075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f0788 00:16:47.770 [2024-07-12 16:20:31.430210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.430277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.444045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190eff18 00:16:47.770 [2024-07-12 16:20:31.445954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.446001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.458843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ef6a8 00:16:47.770 [2024-07-12 16:20:31.460718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.460752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.473305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190eee38 00:16:47.770 [2024-07-12 16:20:31.475092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.770 [2024-07-12 16:20:31.475136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:47.770 [2024-07-12 16:20:31.487656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ee5c8 00:16:47.771 [2024-07-12 16:20:31.489564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.771 [2024-07-12 16:20:31.489609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.029 [2024-07-12 16:20:31.503575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190edd58 00:16:48.029 [2024-07-12 16:20:31.505449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.029 [2024-07-12 16:20:31.505512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:48.029 [2024-07-12 16:20:31.518370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ed4e8 00:16:48.029 [2024-07-12 16:20:31.520167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.029 [2024-07-12 16:20:31.520213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:48.029 [2024-07-12 16:20:31.533143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ecc78 00:16:48.029 [2024-07-12 16:20:31.534964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.029 [2024-07-12 16:20:31.534994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:48.029 [2024-07-12 16:20:31.547619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ec408 00:16:48.029 [2024-07-12 16:20:31.549464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.029 [2024-07-12 16:20:31.549507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:48.029 [2024-07-12 16:20:31.562046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ebb98 00:16:48.029 [2024-07-12 16:20:31.563752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.029 [2024-07-12 16:20:31.563797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:48.029 [2024-07-12 16:20:31.576352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190eb328 00:16:48.029 [2024-07-12 16:20:31.578142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.029 [2024-07-12 16:20:31.578185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:48.029 [2024-07-12 16:20:31.590820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190eaab8 00:16:48.029 [2024-07-12 16:20:31.592533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.029 [2024-07-12 16:20:31.592566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:48.029 [2024-07-12 16:20:31.605265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ea248 00:16:48.029 [2024-07-12 16:20:31.606954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.029 [2024-07-12 16:20:31.606989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.619519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e99d8 00:16:48.030 [2024-07-12 16:20:31.621336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.621381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.634426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e9168 00:16:48.030 [2024-07-12 16:20:31.636073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.636118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.649057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e88f8 00:16:48.030 [2024-07-12 16:20:31.650729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.650775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.664742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e8088 00:16:48.030 [2024-07-12 16:20:31.666752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.666802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.680904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e7818 00:16:48.030 [2024-07-12 16:20:31.682616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.682663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.696900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e6fa8 00:16:48.030 [2024-07-12 16:20:31.698500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.698545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.711202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e6738 00:16:48.030 [2024-07-12 16:20:31.712885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.712938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.725325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e5ec8 00:16:48.030 [2024-07-12 16:20:31.726799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.726843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.739251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e5658 00:16:48.030 [2024-07-12 16:20:31.740859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.740913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:48.030 [2024-07-12 16:20:31.753520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e4de8 00:16:48.030 [2024-07-12 16:20:31.755167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.030 [2024-07-12 16:20:31.755198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:48.288 [2024-07-12 16:20:31.768594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e4578 00:16:48.288 [2024-07-12 16:20:31.770352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.288 [2024-07-12 16:20:31.770418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:48.288 [2024-07-12 16:20:31.783777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e3d08 00:16:48.288 [2024-07-12 16:20:31.785257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.288 [2024-07-12 16:20:31.785302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:48.288 [2024-07-12 16:20:31.798122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e3498 00:16:48.288 [2024-07-12 16:20:31.799554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.799599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.812028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e2c28 00:16:48.289 [2024-07-12 16:20:31.813493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.813538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.825880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e23b8 00:16:48.289 [2024-07-12 16:20:31.827268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 
16:20:31.827328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.839945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e1b48 00:16:48.289 [2024-07-12 16:20:31.841414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.841458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.854211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e12d8 00:16:48.289 [2024-07-12 16:20:31.855550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.855593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.869801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e0a68 00:16:48.289 [2024-07-12 16:20:31.871196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.871227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.885390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e01f8 00:16:48.289 [2024-07-12 16:20:31.886906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.886943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.901373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190df988 00:16:48.289 [2024-07-12 16:20:31.902921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.902974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.917113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190df118 00:16:48.289 [2024-07-12 16:20:31.918447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.918493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.932153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190de8a8 00:16:48.289 [2024-07-12 16:20:31.933561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21954 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:48.289 [2024-07-12 16:20:31.933605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.946622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190de038 00:16:48.289 [2024-07-12 16:20:31.947923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.947991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.969024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190de038 00:16:48.289 [2024-07-12 16:20:31.971334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.971383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.983993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190de8a8 00:16:48.289 [2024-07-12 16:20:31.986335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:31.986381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:31.998785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190df118 00:16:48.289 [2024-07-12 16:20:32.001203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.289 [2024-07-12 16:20:32.001249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:48.289 [2024-07-12 16:20:32.013680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190df988 00:16:48.548 [2024-07-12 16:20:32.016217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.016279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.029085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e01f8 00:16:48.548 [2024-07-12 16:20:32.031342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.031388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.043736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e0a68 00:16:48.548 [2024-07-12 16:20:32.046440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13165 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.046486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.060392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e12d8 00:16:48.548 [2024-07-12 16:20:32.063093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.063131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.076605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e1b48 00:16:48.548 [2024-07-12 16:20:32.079080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.079111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.092141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e23b8 00:16:48.548 [2024-07-12 16:20:32.094726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.094775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.107253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e2c28 00:16:48.548 [2024-07-12 16:20:32.109770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.109803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.122948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e3498 00:16:48.548 [2024-07-12 16:20:32.125355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.125399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.138551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e3d08 00:16:48.548 [2024-07-12 16:20:32.140901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.140953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.153833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e4578 00:16:48.548 [2024-07-12 16:20:32.155967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:2691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.156012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.169135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e4de8 00:16:48.548 [2024-07-12 16:20:32.171349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.171395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.185035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e5658 00:16:48.548 [2024-07-12 16:20:32.187233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.187278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.200827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e5ec8 00:16:48.548 [2024-07-12 16:20:32.203058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.203104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.216276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e6738 00:16:48.548 [2024-07-12 16:20:32.218424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.218470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.231096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e6fa8 00:16:48.548 [2024-07-12 16:20:32.233338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.233382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.245960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e7818 00:16:48.548 [2024-07-12 16:20:32.248176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.248208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:48.548 [2024-07-12 16:20:32.261129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e8088 00:16:48.548 [2024-07-12 16:20:32.263196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:14920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.548 [2024-07-12 16:20:32.263256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.276654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e88f8 00:16:48.807 [2024-07-12 16:20:32.278984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.279031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.291821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e9168 00:16:48.807 [2024-07-12 16:20:32.293926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.293971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.307149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190e99d8 00:16:48.807 [2024-07-12 16:20:32.309393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.309437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.323488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ea248 00:16:48.807 [2024-07-12 16:20:32.325717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.325762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.340315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190eaab8 00:16:48.807 [2024-07-12 16:20:32.342547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.342596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.356652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190eb328 00:16:48.807 [2024-07-12 16:20:32.358718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.358764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.371549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ebb98 00:16:48.807 [2024-07-12 16:20:32.373629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:23269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.373674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.386371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ec408 00:16:48.807 [2024-07-12 16:20:32.388257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.388301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.400654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ecc78 00:16:48.807 [2024-07-12 16:20:32.402582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.402625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.414946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ed4e8 00:16:48.807 [2024-07-12 16:20:32.416862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.416898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.429194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190edd58 00:16:48.807 [2024-07-12 16:20:32.430983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.431026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.443362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ee5c8 00:16:48.807 [2024-07-12 16:20:32.445286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.445331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.457710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190eee38 00:16:48.807 [2024-07-12 16:20:32.459596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.459639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.472049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190ef6a8 00:16:48.807 [2024-07-12 16:20:32.473833] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.473876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.486252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190eff18 00:16:48.807 [2024-07-12 16:20:32.488125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.488170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.501293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f0788 00:16:48.807 [2024-07-12 16:20:32.503075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.503121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.515580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f0ff8 00:16:48.807 [2024-07-12 16:20:32.517413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.517455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:48.807 [2024-07-12 16:20:32.530132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f1868 00:16:48.807 [2024-07-12 16:20:32.532037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.807 [2024-07-12 16:20:32.532084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.546339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f20d8 00:16:49.066 [2024-07-12 16:20:32.548168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.548215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.561250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f2948 00:16:49.066 [2024-07-12 16:20:32.562962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.563007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.576760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f31b8 00:16:49.066 [2024-07-12 16:20:32.578539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.578586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.591539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f3a28 00:16:49.066 [2024-07-12 16:20:32.593296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.593354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.606604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f4298 00:16:49.066 [2024-07-12 16:20:32.608504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.608540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.621909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f4b08 00:16:49.066 [2024-07-12 16:20:32.623574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.623621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.636515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f5378 00:16:49.066 [2024-07-12 16:20:32.638373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.638436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.652243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f5be8 00:16:49.066 [2024-07-12 16:20:32.653942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.653995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.667454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f6458 00:16:49.066 [2024-07-12 16:20:32.669206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.066 [2024-07-12 16:20:32.669252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:49.066 [2024-07-12 16:20:32.682094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f6cc8 00:16:49.067 [2024-07-12 16:20:32.683647] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.067 [2024-07-12 16:20:32.683692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:49.067 [2024-07-12 16:20:32.696396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f7538 00:16:49.067 [2024-07-12 16:20:32.697979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.067 [2024-07-12 16:20:32.698023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:49.067 [2024-07-12 16:20:32.710666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f7da8 00:16:49.067 [2024-07-12 16:20:32.712234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.067 [2024-07-12 16:20:32.712278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:49.067 [2024-07-12 16:20:32.724766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f8618 00:16:49.067 [2024-07-12 16:20:32.726302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.067 [2024-07-12 16:20:32.726346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:49.067 [2024-07-12 16:20:32.738995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f8e88 00:16:49.067 [2024-07-12 16:20:32.740452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.067 [2024-07-12 16:20:32.740513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:49.067 [2024-07-12 16:20:32.753348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f96f8 00:16:49.067 [2024-07-12 16:20:32.754780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.067 [2024-07-12 16:20:32.754824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:49.067 [2024-07-12 16:20:32.769113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190f9f68 00:16:49.067 [2024-07-12 16:20:32.770655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.067 [2024-07-12 16:20:32.770702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:49.067 [2024-07-12 16:20:32.783950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fa7d8 00:16:49.067 [2024-07-12 
16:20:32.785425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.067 [2024-07-12 16:20:32.785470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:49.325 [2024-07-12 16:20:32.799696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fb048 00:16:49.325 [2024-07-12 16:20:32.801231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.325 [2024-07-12 16:20:32.801277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:49.325 [2024-07-12 16:20:32.814277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fb8b8 00:16:49.325 [2024-07-12 16:20:32.815643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.325 [2024-07-12 16:20:32.815688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:49.325 [2024-07-12 16:20:32.828871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fc128 00:16:49.325 [2024-07-12 16:20:32.830271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.325 [2024-07-12 16:20:32.830315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:49.325 [2024-07-12 16:20:32.843251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fc998 00:16:49.325 [2024-07-12 16:20:32.844637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.325 [2024-07-12 16:20:32.844684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:49.325 [2024-07-12 16:20:32.857701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fd208 00:16:49.325 [2024-07-12 16:20:32.859099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.325 [2024-07-12 16:20:32.859143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:49.325 [2024-07-12 16:20:32.872184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fda78 00:16:49.325 [2024-07-12 16:20:32.873589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.325 [2024-07-12 16:20:32.873634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:49.325 [2024-07-12 16:20:32.887289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fe2e8 
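Each error/WRITE/completion triplet in the dump above is one injected digest failure: tcp.c reports a CRC32C data digest mismatch for the PDU (data_crc32_calc_done), and the corresponding WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, which bdev_nvme, configured with --nvme-error-stat and --bdev-retry-count -1, counts and retries instead of failing the I/O. Below is a rough sketch of how this scenario is wired up, mirroring the bperf_rpc/rpc_cmd calls traced further down for the next run_bperf_err invocation; the socket paths, NQN, address and the -i 32 interval are copied from that trace, while the split between the bdevperf socket and the default RPC socket is inferred from which calls pass -s /var/tmp/bperf.sock. It is illustrative, not the digest.sh source itself.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # bdevperf side: keep per-status-code NVMe error counters and retry indefinitely
  $rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # default RPC socket (rpc_cmd in the trace omits -s): no corruption while connecting
  $rpc accel_error_inject_error -o crc32c -t disable

  # connect to the target with data digest (DDGST) enabled
  $rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # inject CRC32C corruption at an interval of 32 operations, then run the timed workload
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests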
00:16:49.325 [2024-07-12 16:20:32.888703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:49.325 [2024-07-12 16:20:32.888738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:16:49.325 [2024-07-12 16:20:32.902497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190feb58
00:16:49.325 [2024-07-12 16:20:32.903791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:49.325 [2024-07-12 16:20:32.903837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:16:49.325 [2024-07-12 16:20:32.926280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fef90
00:16:49.325 [2024-07-12 16:20:32.928902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:49.325 [2024-07-12 16:20:32.928955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:16:49.325 [2024-07-12 16:20:32.941967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190feb58
00:16:49.325 [2024-07-12 16:20:32.944393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:49.325 [2024-07-12 16:20:32.944437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:16:49.325 [2024-07-12 16:20:32.957157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fe2e8
00:16:49.325 [2024-07-12 16:20:32.959469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:49.325 [2024-07-12 16:20:32.959514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:16:49.325 [2024-07-12 16:20:32.971522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf48c0) with pdu=0x2000190fda78
00:16:49.325 [2024-07-12 16:20:32.973962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:49.325 [2024-07-12 16:20:32.974006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:16:49.325
00:16:49.325 Latency(us)
00:16:49.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:49.325 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:49.325 nvme0n1 : 2.00 16751.37 65.44 0.00 0.00 7634.39 6642.97 31457.28
00:16:49.326 ===================================================================================================================
00:16:49.326 Total : 16751.37 65.44 0.00 0.00 7634.39 6642.97 31457.28
00:16:49.326 0
00:16:49.326 16:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
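The get_transient_errcount step above reads back the transient-transport-error counter that bdevperf accumulated during the run; the trace that follows shows it is simply a bdev_get_iostat RPC against the bdevperf socket filtered through jq. A minimal sketch of that query, with the socket path, bdev name and jq filter taken from the trace below:

  # requires bdev_nvme_set_options --nvme-error-stat, as configured earlier in the run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'

digest.sh then asserts the count is non-zero; for this 4096-byte, qd=128 randwrite run the check that follows evaluates as (( 131 > 0 )), i.e. 131 transient transport errors were recorded.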
00:16:49.326 16:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:49.326 16:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:49.326 16:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:49.326 | .driver_specific 00:16:49.326 | .nvme_error 00:16:49.326 | .status_code 00:16:49.326 | .command_transient_transport_error' 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 131 > 0 )) 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79955 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79955 ']' 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79955 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79955 00:16:49.584 killing process with pid 79955 00:16:49.584 Received shutdown signal, test time was about 2.000000 seconds 00:16:49.584 00:16:49.584 Latency(us) 00:16:49.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.584 =================================================================================================================== 00:16:49.584 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79955' 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79955 00:16:49.584 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79955 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80008 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80008 /var/tmp/bperf.sock 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80008 ']' 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:49.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.843 16:20:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:49.843 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:49.843 Zero copy mechanism will not be used. 00:16:49.843 [2024-07-12 16:20:33.440543] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:49.843 [2024-07-12 16:20:33.440630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80008 ] 00:16:50.101 [2024-07-12 16:20:33.572826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.101 [2024-07-12 16:20:33.630211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.101 [2024-07-12 16:20:33.659408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:51.036 16:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:51.295 nvme0n1 00:16:51.295 16:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:51.295 16:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.295 16:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:51.295 16:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.295 16:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:51.295 16:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:51.554 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:51.554 Zero copy mechanism will not be used. 00:16:51.554 Running I/O for 2 seconds... 00:16:51.554 [2024-07-12 16:20:35.154036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.554 [2024-07-12 16:20:35.154361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.554 [2024-07-12 16:20:35.154392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.554 [2024-07-12 16:20:35.159736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.554 [2024-07-12 16:20:35.160091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.554 [2024-07-12 16:20:35.160145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.554 [2024-07-12 16:20:35.165302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.554 [2024-07-12 16:20:35.165636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.554 [2024-07-12 16:20:35.165667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.554 [2024-07-12 16:20:35.170542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.554 [2024-07-12 16:20:35.170821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.554 [2024-07-12 16:20:35.170873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.175523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.175799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.175826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.180305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.180623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.180652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.185175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.185452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.185479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.189974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.190267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.190293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.194692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.194996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.195023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.199428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.199704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.199730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.204289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.204621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.204650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.209200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.209475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.209501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.214008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.214282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.214307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.218707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.218990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.219016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.223452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.223727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.223753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.228205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.228521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.228548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.233100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.233372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.233398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.237902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.238199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.238224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.242728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.243031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.243057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.247532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.247807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.247834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.252375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.252708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.252736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.257280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.257551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.257577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.262059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.262330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.262356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.266972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.267254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.267279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.271843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.272157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.272183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.555 [2024-07-12 16:20:35.276754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.555 [2024-07-12 16:20:35.277103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.555 [2024-07-12 16:20:35.277131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.815 [2024-07-12 16:20:35.282196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.815 [2024-07-12 16:20:35.282513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.815 
[2024-07-12 16:20:35.282540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.815 [2024-07-12 16:20:35.287356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.815 [2024-07-12 16:20:35.287628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.815 [2024-07-12 16:20:35.287654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.815 [2024-07-12 16:20:35.292155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.815 [2024-07-12 16:20:35.292442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.815 [2024-07-12 16:20:35.292495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.297089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.297367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.297393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.301950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.302232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.302258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.306808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.307101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.307128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.311545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.311818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.311844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.316220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.316542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.316571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.321047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.321318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.321345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.325873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.326144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.326169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.330566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.330838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.330874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.335395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.335666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.335692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.340217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.340573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.340600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.345116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.345387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.345412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.349892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.350163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.350188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.354673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.354972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.354998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.359538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.359830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.359855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.364676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.365022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.365050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.370075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.370437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.370467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.375568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.375901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.375947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.381243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.381553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.381580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.386584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.386858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.386928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.391939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.392323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.392351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.397326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.397596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.397623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.402324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.402612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.402639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.407298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.407584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.407605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.412207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.412529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.412558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.417223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.417511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.417537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.422685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 
[2024-07-12 16:20:35.423028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.423058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.427836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.427944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.427987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.432992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.433067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.433090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.437818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.816 [2024-07-12 16:20:35.437899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.816 [2024-07-12 16:20:35.437932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.816 [2024-07-12 16:20:35.442647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.442723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.442744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.447460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.447552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.447573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.452169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.452245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.452265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.457043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) 
with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.457117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.457137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.461664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.461749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.461769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.466413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.466497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.466517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.471418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.471490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.471511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.476162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.476237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.476257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.480985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.481061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.481082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.485605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.485678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.485703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.490461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.490532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.490554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.495153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.495229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.495250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.499750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.499825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.499846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.504560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.504621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.504643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.509370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.509441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.509461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.514091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.514166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.514186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.519524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.519621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.519646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.524520] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.524589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.524613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.529481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.529573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.529594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.534377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.534451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.534472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.817 [2024-07-12 16:20:35.539612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:51.817 [2024-07-12 16:20:35.539689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.817 [2024-07-12 16:20:35.539711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.545032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.545103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.545124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.550066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.550149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.550171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.554985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.555062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.555083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:16:52.078 [2024-07-12 16:20:35.559813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.559914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.559937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.564755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.564867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.564889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.569599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.569674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.569695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.574956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.575063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.575087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.580007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.580099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.580122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.584977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.585051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.585072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.589836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.589951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.589971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.594707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.594769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.594791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.600042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.600115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.600138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.604727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.604835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.604857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.609548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.609667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.609689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.614378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.614464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.614485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.619159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.619232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.619254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.623847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.623957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.623978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.628459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.628563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.078 [2024-07-12 16:20:35.628586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.078 [2024-07-12 16:20:35.633762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.078 [2024-07-12 16:20:35.633877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.633907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.639260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.639353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.639375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.644241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.644317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.644340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.649405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.649481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.649502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.654317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.654391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.654412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.659083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.659157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.659178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.663854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.663942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.663963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.668587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.668663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.668685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.673581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.673660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.673683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.679018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.679099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.679123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.684081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.684155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.684177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.689463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.689546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.689568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.694756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.694840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 
[2024-07-12 16:20:35.694862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.699993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.700057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.700080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.705283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.705359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.705381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.710601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.710675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.710696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.716023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.716103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.716126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.721360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.721434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.721456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.726446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.726533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.726554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.731526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.731603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.731624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.736426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.736531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.736555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.741777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.741843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.741873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.746948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.747032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.747055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.751999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.752083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.752105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.757129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.757204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.757228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.762571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.762657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.762681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.767688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.767764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.767785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.773367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.773450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.773473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.778386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.778480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.079 [2024-07-12 16:20:35.778535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.079 [2024-07-12 16:20:35.783474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.079 [2024-07-12 16:20:35.783548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.080 [2024-07-12 16:20:35.783570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.080 [2024-07-12 16:20:35.788411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.080 [2024-07-12 16:20:35.788529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.080 [2024-07-12 16:20:35.788552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.080 [2024-07-12 16:20:35.793533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.080 [2024-07-12 16:20:35.793616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.080 [2024-07-12 16:20:35.793637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.080 [2024-07-12 16:20:35.798700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.080 [2024-07-12 16:20:35.798787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.080 [2024-07-12 16:20:35.798808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.804108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.804214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.804253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.809074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.809178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.809200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.814212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.814285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.814306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.818950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.819025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.819047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.823678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.823762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.823783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.828767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.828903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.828925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.833569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.833652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.833673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.838492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 
[2024-07-12 16:20:35.838567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.838588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.843431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.843510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.843533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.848373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.848480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.848503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.853366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.853426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.853448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.858363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.858471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.858494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.863278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.863352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.863374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.868113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.868200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.868222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.873407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with 
pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.873491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.873512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.878364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.878438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.878459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.883234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.883337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.883357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.887909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.887968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.887988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.892665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.340 [2024-07-12 16:20:35.892755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.340 [2024-07-12 16:20:35.892776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.340 [2024-07-12 16:20:35.897476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.897550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.897571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.902237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.902328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.902348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.906877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.906962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.906983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.911500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.911579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.911600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.916367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.916483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.916507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.921152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.921233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.921254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.925856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.925958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.925979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.930541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.930615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.930635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.935381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.935455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.935476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.941067] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.941140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.941163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.946050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.946126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.946148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.950937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.951025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.951046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.955727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.955830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.955850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.960558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.960650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.960672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.965393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.965478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.965499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.970428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.970525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.970546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:16:52.341 [2024-07-12 16:20:35.975732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.975797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.975819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.981231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.981307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.981330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.986485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.986592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.986632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.991650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.991743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.991765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:35.996728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:35.996794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:35.996818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:36.001687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:36.001765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:36.001787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:36.006714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:36.006799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:36.006821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:36.011454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:36.011537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:36.011557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:36.016210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:36.016302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:36.016323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:36.020989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:36.021064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:36.021085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:36.025713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:36.025791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:36.025812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:36.030394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:36.030475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:36.030495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.341 [2024-07-12 16:20:36.035404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.341 [2024-07-12 16:20:36.035464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.341 [2024-07-12 16:20:36.035485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.342 [2024-07-12 16:20:36.040855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.342 [2024-07-12 16:20:36.040948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.342 [2024-07-12 16:20:36.040982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.342 [2024-07-12 16:20:36.045680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.342 [2024-07-12 16:20:36.045777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.342 [2024-07-12 16:20:36.045799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.342 [2024-07-12 16:20:36.051116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.342 [2024-07-12 16:20:36.051199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.342 [2024-07-12 16:20:36.051221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.342 [2024-07-12 16:20:36.056139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.342 [2024-07-12 16:20:36.056215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.342 [2024-07-12 16:20:36.056237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.342 [2024-07-12 16:20:36.061035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.342 [2024-07-12 16:20:36.061136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.342 [2024-07-12 16:20:36.061157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.066265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.066358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.066381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.071049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.071152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.071175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.076010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.076100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.076122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.081339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.081425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.081447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.086369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.086447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.086468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.091300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.091386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.091406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.096499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.096564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.096589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.101735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.101814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.101852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.106720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.106804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.106825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.111553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.111628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 
[2024-07-12 16:20:36.111650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.116368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.116458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.116497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.601 [2024-07-12 16:20:36.121271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.601 [2024-07-12 16:20:36.121349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.601 [2024-07-12 16:20:36.121370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.126253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.126327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.126348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.131101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.131174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.131195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.135795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.135869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.135907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.140637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.140728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.140750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.145397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.145474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.145495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.150139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.150213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.150234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.154881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.154960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.154981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.159678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.159750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.159771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.164433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.164539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.164561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.169801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.169919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.169941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.174938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.175000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.175021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.180209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.180283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.180305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.185703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.185781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.185805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.191266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.191329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.191353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.196810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.196939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.196974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.201973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.202057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.202078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.207016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.207101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.207122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.211795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.211883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.211904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.216631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.216727] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.216750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.221502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.221584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.221605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.226135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.226210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.226230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.230737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.230823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.230843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.235500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.235598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.235618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.240303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.240376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.240396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.245100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.245185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.245207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.250308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.250374] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.250398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.255651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.255736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.255758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.260584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.260679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.260703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.602 [2024-07-12 16:20:36.265448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.602 [2024-07-12 16:20:36.265547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.602 [2024-07-12 16:20:36.265568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.270303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.270382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.270402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.275261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.275346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.275368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.280313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.280394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.280428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.285143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 
00:16:52.603 [2024-07-12 16:20:36.285227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.285263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.290001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.290074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.290095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.294989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.295070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.295091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.299833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.299956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.299977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.304679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.304767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.304802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.309460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.309544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.309564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.314172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.314244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.314264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.318892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.318968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.318988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.603 [2024-07-12 16:20:36.323576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.603 [2024-07-12 16:20:36.323654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.603 [2024-07-12 16:20:36.323677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.328710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.328819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.328856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.333811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.333884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.333906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.338513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.338595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.338616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.343282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.343362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.343383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.347989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.348072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.348093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.352759] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.352849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.352895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.357615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.357697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.357718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.362402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.362487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.362507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.367130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.863 [2024-07-12 16:20:36.367213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.863 [2024-07-12 16:20:36.367234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.863 [2024-07-12 16:20:36.371755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.371837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.371858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.376441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.376556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.376578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.381231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.381320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.381341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:52.864 [2024-07-12 16:20:36.386443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.386520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.386541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.391683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.391759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.391781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.397065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.397133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.397157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.402557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.402631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.402652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.407850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.407958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.407997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.413199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.413308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.413329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.418240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.418354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.418374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.423404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.423486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.423506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.428131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.428211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.428232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.432860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.432952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.432972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.437523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.437604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.437625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.442403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.442487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.442507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.447295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.447369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.447390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.451977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.452058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.452078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.456731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.456807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.456843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.461466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.461549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.461569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.466233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.466334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.466355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.470953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.471038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.471058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.475680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.475758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.475778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.480440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.480546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.480569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.485272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.485353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.485374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.490006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.490080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.490102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.494754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.494828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.494849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.500066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.500172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.500204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.505435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.505521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.505544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.510331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.864 [2024-07-12 16:20:36.510406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.864 [2024-07-12 16:20:36.510427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.864 [2024-07-12 16:20:36.515453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.515556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.515578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.520382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.520482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 
[2024-07-12 16:20:36.520506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.525322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.525396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.525417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.530203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.530279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.530300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.535031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.535119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.535142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.540416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.540541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.540565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.545608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.545701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.545722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.550481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.550569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.550590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.555388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.555468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.555489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.560365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.560434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.560479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.565329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.565422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.565443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.570248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.570328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.570348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.574945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.575027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.575047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.579683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.579756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.579777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.865 [2024-07-12 16:20:36.584490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:52.865 [2024-07-12 16:20:36.584575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.865 [2024-07-12 16:20:36.584596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.125 [2024-07-12 16:20:36.589581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.125 [2024-07-12 16:20:36.589659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.125 [2024-07-12 16:20:36.589681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.125 [2024-07-12 16:20:36.594593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.125 [2024-07-12 16:20:36.594678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.125 [2024-07-12 16:20:36.594716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.125 [2024-07-12 16:20:36.599505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.125 [2024-07-12 16:20:36.599575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.125 [2024-07-12 16:20:36.599596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.125 [2024-07-12 16:20:36.605114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.125 [2024-07-12 16:20:36.605215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.125 [2024-07-12 16:20:36.605237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.125 [2024-07-12 16:20:36.609993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.125 [2024-07-12 16:20:36.610071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.610092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.614920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.615007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.615027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.619830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.619951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.619974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.624479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.624554] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.624576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.629298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.629377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.629398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.634697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.634782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.634804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.639649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.639733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.639753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.644625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.644708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.644732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.649428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.649502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.649523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.654339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.654419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.654440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.659145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.659228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.659249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.664012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.664095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.664116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.668908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.668995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.669016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.673629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.673724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.673745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.678437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.678517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.678537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.683106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.683200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.683220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.687831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.687942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.687964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.692564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 
[2024-07-12 16:20:36.692643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.692667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.697299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.697376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.697397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.702098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.702179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.702200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.706819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.706900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.706922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.711543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.711623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.711644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.716280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.716354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.716374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.720988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.721076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.721097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.725696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) 
with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.725789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.725809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.730428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.730527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.730548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.735228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.735300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.735321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.740092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.740176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.740198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.745030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.126 [2024-07-12 16:20:36.745111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.126 [2024-07-12 16:20:36.745132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.126 [2024-07-12 16:20:36.749673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.749755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.749775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.754456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.754543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.754564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.759188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.759271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.759292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.763808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.763908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.763929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.768610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.768684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.768706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.773461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.773541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.773561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.778186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.778250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.778270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.782816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.782925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.782946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.787458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.787554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.787574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.792149] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.792233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.792254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.796910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.797004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.797025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.801581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.801679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.801699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.806273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.806356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.806377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.811028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.811103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.811124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.815617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.815700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.815721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.820361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.820443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.820504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
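[Note] Each failed WRITE above completes with "(00/22)", i.e. status code type 00h (generic) and status code 22h, which the log prints as COMMAND TRANSIENT TRANSPORT ERROR; cdw0, sqhd, p, m and dnr are the remaining completion fields. A minimal decode sketch follows, assuming the usual completion DW3 bit layout (P=16, SC=17..24, SCT=25..27, M=30, DNR=31); this is not SPDK's spdk_nvme_print_completion(), just an illustration of where those numbers come from.

    /* Illustrative decode of the status bits in an NVMe completion's DW3.
     * Bit positions are assumed per the NVMe base spec layout noted above. */
    #include <stdint.h>
    #include <stdio.h>

    struct status_bits {
        uint8_t p, sc, sct, m, dnr;
    };

    static struct status_bits decode_dw3(uint32_t dw3)
    {
        struct status_bits s;
        s.p   = (dw3 >> 16) & 0x1;   /* phase tag */
        s.sc  = (dw3 >> 17) & 0xff;  /* status code */
        s.sct = (dw3 >> 25) & 0x7;   /* status code type */
        s.m   = (dw3 >> 30) & 0x1;   /* more */
        s.dnr = (dw3 >> 31) & 0x1;   /* do not retry */
        return s;
    }

    int main(void)
    {
        /* SCT=0x0, SC=0x22 -> the "(00/22)" transient transport error in the log. */
        uint32_t dw3 = (0x0u << 25) | (0x22u << 17);
        struct status_bits s = decode_dw3(dw3);
        printf("sct:0x%x sc:0x%x m:%u dnr:%u p:%u\n", s.sct, s.sc, s.m, s.dnr, s.p);
        return 0;
    }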
00:16:53.127 [2024-07-12 16:20:36.825130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.825202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.825223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.829737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.829809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.829830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.834448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.834521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.834542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.839201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.839316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.839337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.843992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.844073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.844093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.127 [2024-07-12 16:20:36.849159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.127 [2024-07-12 16:20:36.849229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.127 [2024-07-12 16:20:36.849273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.854570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.854658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.854682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.859542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.859635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.859658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.864378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.864502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.864527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.869360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.869446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.869467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.874295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.874388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.874410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.879699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.879794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.879815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.884956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.885052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.885074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.890178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.890264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.890287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.895552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.895675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.387 [2024-07-12 16:20:36.895698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.387 [2024-07-12 16:20:36.900929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.387 [2024-07-12 16:20:36.901030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.901054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.906106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.906185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.906206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.911084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.911160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.911182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.916037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.916134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.916155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.921228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.921317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.921338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.926046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.926148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.926169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.930853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.930958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.930979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.935752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.935838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.935859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.940654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.940732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.940755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.945541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.945619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.945640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.950384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.950459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.950480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.955283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.955359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.955380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.960190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.960282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 
16:20:36.960302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.964954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.965028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.965049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.970091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.970184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.970205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.974831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.974914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.974936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.979637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.979731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.979751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.984770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.984876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.984897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.989980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.990077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.990099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:36.995235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:36.995323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:36.995346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.000877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.000965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.000989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.006116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.006201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.006222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.011367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.011441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.011462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.016690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.016776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.016799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.021827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.021932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.021953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.026713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.026807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.026830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.032193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.032276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.032301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.037836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.037926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.037950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.042783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.042887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.388 [2024-07-12 16:20:37.042920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.388 [2024-07-12 16:20:37.048077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.388 [2024-07-12 16:20:37.048164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.048185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.053078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.053168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.053189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.058018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.058100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.058123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.062979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.063054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.063075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.068037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.068111] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.068133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.073448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.073559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.073580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.078461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.078558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.078580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.083287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.083360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.083381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.088182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.088268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.088288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.093046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.093142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.093197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.098427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.098520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.098540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.103273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.103365] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.103386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.389 [2024-07-12 16:20:37.108704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.389 [2024-07-12 16:20:37.108784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.389 [2024-07-12 16:20:37.108807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.648 [2024-07-12 16:20:37.113793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.648 [2024-07-12 16:20:37.113903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.648 [2024-07-12 16:20:37.113927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.648 [2024-07-12 16:20:37.118771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.648 [2024-07-12 16:20:37.118874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.648 [2024-07-12 16:20:37.118897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.648 [2024-07-12 16:20:37.123511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.648 [2024-07-12 16:20:37.123604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.648 [2024-07-12 16:20:37.123626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.648 [2024-07-12 16:20:37.128206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.648 [2024-07-12 16:20:37.128299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.648 [2024-07-12 16:20:37.128319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.648 [2024-07-12 16:20:37.132922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.648 [2024-07-12 16:20:37.133006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.648 [2024-07-12 16:20:37.133027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.648 [2024-07-12 16:20:37.137637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 
00:16:53.648 [2024-07-12 16:20:37.137723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.648 [2024-07-12 16:20:37.137743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.648 [2024-07-12 16:20:37.142472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf4c00) with pdu=0x2000190fef90 00:16:53.648 [2024-07-12 16:20:37.142549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.648 [2024-07-12 16:20:37.142600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.648 00:16:53.648 Latency(us) 00:16:53.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.648 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:53.648 nvme0n1 : 2.00 6224.03 778.00 0.00 0.00 2564.52 1876.71 11141.12 00:16:53.648 =================================================================================================================== 00:16:53.648 Total : 6224.03 778.00 0.00 0.00 2564.52 1876.71 11141.12 00:16:53.648 0 00:16:53.648 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:53.648 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:53.648 | .driver_specific 00:16:53.648 | .nvme_error 00:16:53.648 | .status_code 00:16:53.648 | .command_transient_transport_error' 00:16:53.648 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:53.648 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 402 > 0 )) 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80008 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80008 ']' 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80008 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80008 00:16:53.906 killing process with pid 80008 00:16:53.906 Received shutdown signal, test time was about 2.000000 seconds 00:16:53.906 00:16:53.906 Latency(us) 00:16:53.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.906 =================================================================================================================== 00:16:53.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:53.906 16:20:37 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80008' 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80008 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80008 00:16:53.906 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79829 00:16:53.907 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79829 ']' 00:16:53.907 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79829 00:16:53.907 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79829 00:16:54.165 killing process with pid 79829 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79829' 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79829 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79829 00:16:54.165 00:16:54.165 real 0m15.321s 00:16:54.165 user 0m29.816s 00:16:54.165 sys 0m4.401s 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:54.165 ************************************ 00:16:54.165 END TEST nvmf_digest_error 00:16:54.165 ************************************ 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:54.165 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.424 rmmod nvme_tcp 00:16:54.424 rmmod nvme_fabrics 00:16:54.424 rmmod nvme_keyring 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79829 ']' 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79829 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 79829 ']' 00:16:54.424 16:20:37 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 79829 00:16:54.424 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79829) - No such process 00:16:54.424 Process with pid 79829 is not found 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 79829 is not found' 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:54.424 00:16:54.424 real 0m33.022s 00:16:54.424 user 1m3.625s 00:16:54.424 sys 0m9.073s 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.424 16:20:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:54.424 ************************************ 00:16:54.424 END TEST nvmf_digest 00:16:54.424 ************************************ 00:16:54.424 16:20:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:54.424 16:20:38 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:16:54.424 16:20:38 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:16:54.425 16:20:38 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:54.425 16:20:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:54.425 16:20:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.425 16:20:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.425 ************************************ 00:16:54.425 START TEST nvmf_host_multipath 00:16:54.425 ************************************ 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:54.425 * Looking for test storage... 
00:16:54.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:16:54.425 16:20:38 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.425 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:54.684 Cannot find device "nvmf_tgt_br" 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:54.684 Cannot find device "nvmf_tgt_br2" 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:16:54.684 Cannot find device "nvmf_tgt_br" 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:54.684 Cannot find device "nvmf_tgt_br2" 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:54.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:54.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.684 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:54.685 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:54.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:16:54.944 00:16:54.944 --- 10.0.0.2 ping statistics --- 00:16:54.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.944 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:54.944 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:54.944 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:54.944 00:16:54.944 --- 10.0.0.3 ping statistics --- 00:16:54.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.944 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:54.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:54.944 00:16:54.944 --- 10.0.0.1 ping statistics --- 00:16:54.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.944 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80271 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80271 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80271 ']' 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.944 16:20:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:54.944 [2024-07-12 16:20:38.555464] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:16:54.944 [2024-07-12 16:20:38.555546] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.203 [2024-07-12 16:20:38.696763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:55.203 [2024-07-12 16:20:38.766805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.203 [2024-07-12 16:20:38.766879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.203 [2024-07-12 16:20:38.766894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.203 [2024-07-12 16:20:38.766904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.203 [2024-07-12 16:20:38.766913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.203 [2024-07-12 16:20:38.767074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.203 [2024-07-12 16:20:38.767088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.203 [2024-07-12 16:20:38.801158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80271 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:56.150 [2024-07-12 16:20:39.830873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.150 16:20:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:56.428 Malloc0 00:16:56.428 16:20:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:56.688 16:20:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:56.947 16:20:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.206 [2024-07-12 16:20:40.831794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.206 16:20:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:57.464 [2024-07-12 16:20:41.043926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:57.464 16:20:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80321 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80321 /var/tmp/bdevperf.sock 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80321 ']' 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.465 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.465 16:20:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:58.400 16:20:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.400 16:20:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:16:58.400 16:20:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:58.658 16:20:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:58.916 Nvme0n1 00:16:58.916 16:20:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:59.175 Nvme0n1 00:16:59.433 16:20:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:59.433 16:20:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:00.367 16:20:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:00.367 16:20:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:00.625 16:20:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:00.884 16:20:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:00.884 16:20:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80271 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:00.884 16:20:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80366 00:17:00.884 16:20:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.444 Attaching 4 probes... 
00:17:07.444 @path[10.0.0.2, 4421]: 19546 00:17:07.444 @path[10.0.0.2, 4421]: 19872 00:17:07.444 @path[10.0.0.2, 4421]: 18508 00:17:07.444 @path[10.0.0.2, 4421]: 18189 00:17:07.444 @path[10.0.0.2, 4421]: 18466 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80366 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:07.444 16:20:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:07.444 16:20:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:07.702 16:20:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:07.702 16:20:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80483 00:17:07.702 16:20:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80271 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:07.702 16:20:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:14.262 Attaching 4 probes... 
00:17:14.262 @path[10.0.0.2, 4420]: 18057 00:17:14.262 @path[10.0.0.2, 4420]: 18189 00:17:14.262 @path[10.0.0.2, 4420]: 18432 00:17:14.262 @path[10.0.0.2, 4420]: 18005 00:17:14.262 @path[10.0.0.2, 4420]: 20661 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80483 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:14.262 16:20:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:14.520 16:20:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:14.520 16:20:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80597 00:17:14.520 16:20:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80271 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:14.520 16:20:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:21.125 Attaching 4 probes... 
00:17:21.125 @path[10.0.0.2, 4421]: 15076 00:17:21.125 @path[10.0.0.2, 4421]: 19878 00:17:21.125 @path[10.0.0.2, 4421]: 18224 00:17:21.125 @path[10.0.0.2, 4421]: 17548 00:17:21.125 @path[10.0.0.2, 4421]: 17577 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80597 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:21.125 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:21.384 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:21.384 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80709 00:17:21.384 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:21.384 16:21:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80271 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:27.945 16:21:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:27.945 16:21:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:27.945 Attaching 4 probes... 
00:17:27.945 00:17:27.945 00:17:27.945 00:17:27.945 00:17:27.945 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80709 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80271 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80826 00:17:27.945 16:21:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:34.511 Attaching 4 probes... 
00:17:34.511 @path[10.0.0.2, 4421]: 19827 00:17:34.511 @path[10.0.0.2, 4421]: 20048 00:17:34.511 @path[10.0.0.2, 4421]: 20169 00:17:34.511 @path[10.0.0.2, 4421]: 17785 00:17:34.511 @path[10.0.0.2, 4421]: 17470 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80826 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:34.511 16:21:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:34.511 16:21:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:35.448 16:21:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:35.448 16:21:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80945 00:17:35.448 16:21:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80271 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:35.448 16:21:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:42.017 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:42.018 Attaching 4 probes... 
00:17:42.018 @path[10.0.0.2, 4420]: 19554 00:17:42.018 @path[10.0.0.2, 4420]: 19912 00:17:42.018 @path[10.0.0.2, 4420]: 18741 00:17:42.018 @path[10.0.0.2, 4420]: 17808 00:17:42.018 @path[10.0.0.2, 4420]: 19232 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80945 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:42.018 [2024-07-12 16:21:25.561500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:42.018 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:42.276 16:21:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:17:48.835 16:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:48.835 16:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81124 00:17:48.835 16:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80271 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:48.835 16:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:55.404 16:21:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:55.404 16:21:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:55.404 Attaching 4 probes... 
00:17:55.404 @path[10.0.0.2, 4421]: 17140 00:17:55.404 @path[10.0.0.2, 4421]: 17533 00:17:55.404 @path[10.0.0.2, 4421]: 17952 00:17:55.404 @path[10.0.0.2, 4421]: 17509 00:17:55.404 @path[10.0.0.2, 4421]: 17762 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81124 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80321 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80321 ']' 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80321 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80321 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:55.404 killing process with pid 80321 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80321' 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80321 00:17:55.404 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80321 00:17:55.404 Connection closed with partial response: 00:17:55.405 00:17:55.405 00:17:55.405 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80321 00:17:55.405 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:55.405 [2024-07-12 16:20:41.111257] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:17:55.405 [2024-07-12 16:20:41.111365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80321 ] 00:17:55.405 [2024-07-12 16:20:41.248367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.405 [2024-07-12 16:20:41.316924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.405 [2024-07-12 16:20:41.350617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:55.405 Running I/O for 90 seconds... 
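Before the per-I/O trace that follows, it may help to condense the host-side flow replayed above: the bdev layer attaches the same subsystem through ports 4420 and 4421 into a single Nvme0n1 bdev, and each confirm_io_on_port cycle flips the listeners' ANA states, samples the active path with bpftrace for a few seconds, and checks that the port the target reports for the expected ANA state matches the port the probes actually saw. A condensed sketch under those assumptions (variable names, the trace.txt redirection, and the fixed sleep are illustrative, not host/multipath.sh verbatim):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Two attach calls, the second with -x multipath, so both paths back the same Nvme0n1 bdev.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn -x multipath -l -1 -o 10

# One ANA flip: 4420 non_optimized, 4421 optimized.
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized

# Sample which path carries I/O (80271 is the nvmf_tgt pid from the log above).
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80271 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
dtrace_pid=$!
sleep 6

# Port the target reports as optimized...
expected_port=$($rpc nvmf_subsystem_get_listeners $nqn | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# ...versus the port the probes saw, parsed from lines like "@path[10.0.0.2, 4421]: 19546".
seen_port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

kill $dtrace_pid
[ "$seen_port" = "$expected_port" ]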
00:17:55.405 [2024-07-12 16:20:51.287115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.287819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-07-12 16:20:51.287855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-07-12 16:20:51.287891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-07-12 16:20:51.287944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.287966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-07-12 16:20:51.287981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.288017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-07-12 16:20:51.288031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.288052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-07-12 16:20:51.288066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.288087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-07-12 16:20:51.288124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.288146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.405 [2024-07-12 16:20:51.288160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.288185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.288201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.288222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.288236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.288256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.405 [2024-07-12 16:20:51.288270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:55.405 [2024-07-12 16:20:51.288290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.288774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.288810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.288846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.288894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.288947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.288968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.288983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.289018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.289071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.289114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.289152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289174] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.289188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.289225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.289276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.289312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.289347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.289382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.406 [2024-07-12 16:20:51.289418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.289452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.289487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.406 [2024-07-12 16:20:51.289533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:55.406 [2024-07-12 16:20:51.289559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.289574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.289625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.289662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.289698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.289735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.289777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.289814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.289851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.289887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.289924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.289970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.289986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.290116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.290151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.290187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.290222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.290258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.290293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 
[2024-07-12 16:20:51.290328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.407 [2024-07-12 16:20:51.290378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.407 [2024-07-12 16:20:51.290788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:55.407 [2024-07-12 16:20:51.290809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.290824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.290846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.290862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.290884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.290899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.290934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.290950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.290972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.290993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 
m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.291547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.291560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.408 [2024-07-12 16:20:51.293125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:55.408 [2024-07-12 16:20:51.293718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.408 [2024-07-12 16:20:51.293732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:51.293754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:51.293769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:51.293791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:51.293809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:51.293839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:51.293854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:51.293876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:51.293891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:51.293931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:51.293951] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.781975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.409 [2024-07-12 16:20:57.782924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.782987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.783000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.783018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.783030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.783048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.409 [2024-07-12 16:20:57.783060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:55.409 [2024-07-12 16:20:57.783078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:55.410 [2024-07-12 16:20:57.783415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.410 [2024-07-12 16:20:57.783957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.783974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.783986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:55.410 [2024-07-12 16:20:57.784004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.410 [2024-07-12 16:20:57.784016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:55.411 [2024-07-12 16:20:57.784357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.784447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.784971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.784991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.785003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.785021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.785034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.785051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.785063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.785081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.785094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.785111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.411 [2024-07-12 16:20:57.785123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.785141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.785153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.785171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.411 [2024-07-12 16:20:57.785183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:55.411 [2024-07-12 16:20:57.785208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 
dnr:0 00:17:55.412 [2024-07-12 16:20:57.785450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.785889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.785972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.785990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.786009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.786022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.786041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.786053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.786071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.786084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.786102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.786115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.786770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.412 [2024-07-12 16:20:57.786794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.786826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.786840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.786867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.412 [2024-07-12 16:20:57.786880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:55.412 [2024-07-12 16:20:57.786920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:20:57.786936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:20:57.786963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:20:57.786977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:20:57.787003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:20:57.787016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:20:57.787043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:20:57.787056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:20:57.787083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:20:57.787104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:20:57.787147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:55.413 [2024-07-12 16:20:57.787166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.835577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.835633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.835735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.835756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.835778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.835793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.835814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.835829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.835849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.835864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.835884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.835898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.835934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.835951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.835971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.835985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 
nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 
16:21:04.836418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.413 [2024-07-12 16:21:04.836608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.836653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.836690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.413 [2024-07-12 16:21:04.836727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:55.413 [2024-07-12 16:21:04.836749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.836764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.836786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.836800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.836822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.836837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.836858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.836873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.836909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.836926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.836948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.836963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.836999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:55.414 [2024-07-12 16:21:04.837572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.414 [2024-07-12 16:21:04.837847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.837957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.837995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:118776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.414 [2024-07-12 16:21:04.838012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:55.414 [2024-07-12 16:21:04.838034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.415 [2024-07-12 16:21:04.838547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.415 [2024-07-12 16:21:04.838579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.415 [2024-07-12 16:21:04.838612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.415 [2024-07-12 16:21:04.838644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.415 [2024-07-12 16:21:04.838677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.415 [2024-07-12 16:21:04.838730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 
sqhd:0028 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.415 [2024-07-12 16:21:04.838766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.415 [2024-07-12 16:21:04.838802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.838960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.415 [2024-07-12 16:21:04.838986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.415 [2024-07-12 16:21:04.839005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.839478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 
16:21:04.839512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.839545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.839577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.839610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.839642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.839692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.839746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119040 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.839972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.839988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.840010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.840024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.840061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.416 [2024-07-12 16:21:04.840090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.840125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.840138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.840157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.840170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.840189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.840202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.840222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.840235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.840254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.416 [2024-07-12 16:21:04.840267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:55.416 [2024-07-12 16:21:04.840286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:04.840299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.840321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:04.840334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.841025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:04.841068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.841102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:04.841131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.841163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:04.841177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.841207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:04.841237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.841265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:04.841279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.841307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:04.841321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.841349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:04.841363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:04.841391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:04.841405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:17:55.417 [2024-07-12 16:21:04.841449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:04.841469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:18.080452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:18.080563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:18.080596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:18.080628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:18.080659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:18.080711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:18.080744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.417 [2024-07-12 16:21:18.080775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.080821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.080866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.080895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.080940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.080970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.080988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:55.417 [2024-07-12 16:21:18.081318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.417 [2024-07-12 16:21:18.081331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081760] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.081811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.081975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.081988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.418 [2024-07-12 16:21:18.082281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.418 [2024-07-12 16:21:18.082308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.418 [2024-07-12 16:21:18.082321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 
[2024-07-12 16:21:18.082347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.082952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.082970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.082983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72672 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.419 [2024-07-12 16:21:18.083287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.083328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.083353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.083378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.083404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.083432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 [2024-07-12 16:21:18.083457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.419 
[2024-07-12 16:21:18.083482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.419 [2024-07-12 16:21:18.083495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.420 [2024-07-12 16:21:18.083507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.420 [2024-07-12 16:21:18.083532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.420 [2024-07-12 16:21:18.083557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.420 [2024-07-12 16:21:18.083581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.420 [2024-07-12 16:21:18.083607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.420 [2024-07-12 16:21:18.083640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2a30 is same with the state(5) to be set 00:17:55.420 [2024-07-12 16:21:18.083668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.083677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.083686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72328 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.083698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.083719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.083728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72336 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.083739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.083759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.083768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72344 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.083779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.083801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.083811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72704 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.083822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.083842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.083851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72712 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.083863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.083883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.083892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72720 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.083903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.083936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.083945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72728 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.083957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.083975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.083984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.083995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72736 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.084006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 
[2024-07-12 16:21:18.084019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.084027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.084037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72744 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.084048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.084059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.084068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.084077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72752 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.084088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.084099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.084108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.084117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72760 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.084128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.420 [2024-07-12 16:21:18.084140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.420 [2024-07-12 16:21:18.084151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.420 [2024-07-12 16:21:18.084160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72768 len:8 PRP1 0x0 PRP2 0x0 00:17:55.420 [2024-07-12 16:21:18.084171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72776 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72784 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084263] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72792 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72800 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72808 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72816 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72824 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72832 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:17:55.421 [2024-07-12 16:21:18.084552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72840 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.421 [2024-07-12 16:21:18.084595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.421 [2024-07-12 16:21:18.084604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72848 len:8 PRP1 0x0 PRP2 0x0 00:17:55.421 [2024-07-12 16:21:18.084616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.084658] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15d2a30 was disconnected and freed. reset controller. 00:17:55.421 [2024-07-12 16:21:18.085665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.421 [2024-07-12 16:21:18.085735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.421 [2024-07-12 16:21:18.085788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.421 [2024-07-12 16:21:18.085817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7cb0 (9): Bad file descriptor 00:17:55.421 [2024-07-12 16:21:18.086237] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.421 [2024-07-12 16:21:18.086266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d7cb0 with addr=10.0.0.2, port=4421 00:17:55.421 [2024-07-12 16:21:18.086281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7cb0 is same with the state(5) to be set 00:17:55.421 [2024-07-12 16:21:18.086311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7cb0 (9): Bad file descriptor 00:17:55.421 [2024-07-12 16:21:18.086339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:55.421 [2024-07-12 16:21:18.086354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:55.421 [2024-07-12 16:21:18.086366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:55.421 [2024-07-12 16:21:18.086395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:55.421 [2024-07-12 16:21:18.086410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.421 [2024-07-12 16:21:28.161784] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
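The long run of ABORTED - SQ DELETION (00/08) completions above is the expected fallout of the active TCP qpair being deleted during a path switch: every outstanding and queued I/O is failed back, the controller is reset, and bdev_nvme reconnects to the second listener (10.0.0.2:4421); the first connect() attempts are refused (errno 111, ECONNREFUSED) until that listener is reachable, and the reset completes about ten seconds later. A minimal sketch of driving such a path flip from the target side, assuming only the NQN and ports visible in this log; nvmf_subsystem_remove_listener and the ordering here are assumptions, not steps taken from multipath.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the first path; I/O queued on the old qpair completes as ABORTED - SQ DELETION.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Until the second listener exists, reconnect attempts to 4421 fail with ECONNREFUSED.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # bdev_nvme keeps retrying and eventually logs "Resetting controller successful."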
00:17:55.421 Received shutdown signal, test time was about 55.141774 seconds 00:17:55.421 00:17:55.421 Latency(us) 00:17:55.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.421 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.421 Verification LBA range: start 0x0 length 0x4000 00:17:55.421 Nvme0n1 : 55.14 8014.13 31.31 0.00 0.00 15939.05 484.07 7015926.69 00:17:55.421 =================================================================================================================== 00:17:55.421 Total : 8014.13 31.31 0.00 0.00 15939.05 484.07 7015926.69 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.421 rmmod nvme_tcp 00:17:55.421 rmmod nvme_fabrics 00:17:55.421 rmmod nvme_keyring 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80271 ']' 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80271 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80271 ']' 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80271 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80271 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:55.421 killing process with pid 80271 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80271' 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80271 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80271 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:55.421 00:17:55.421 real 1m0.852s 00:17:55.421 user 2m48.413s 00:17:55.421 sys 0m18.301s 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.421 ************************************ 00:17:55.421 16:21:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:55.421 END TEST nvmf_host_multipath 00:17:55.421 ************************************ 00:17:55.421 16:21:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.421 16:21:38 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:55.421 16:21:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.421 16:21:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.421 16:21:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.421 ************************************ 00:17:55.421 START TEST nvmf_timeout 00:17:55.421 ************************************ 00:17:55.421 16:21:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:55.421 * Looking for test storage... 
00:17:55.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.422 
16:21:39 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.422 16:21:39 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:55.422 Cannot find device "nvmf_tgt_br" 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.422 Cannot find device "nvmf_tgt_br2" 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:55.422 Cannot find device "nvmf_tgt_br" 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:17:55.422 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:55.681 Cannot find device "nvmf_tgt_br2" 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:55.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.681 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:55.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:55.681 00:17:55.681 --- 10.0.0.2 ping statistics --- 00:17:55.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.681 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:55.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:55.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:55.681 00:17:55.681 --- 10.0.0.3 ping statistics --- 00:17:55.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.681 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:55.681 00:17:55.681 --- 10.0.0.1 ping statistics --- 00:17:55.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.681 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.681 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81423 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81423 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81423 ']' 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.940 16:21:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:55.940 [2024-07-12 16:21:39.471582] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
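The nvmf_veth_init trace above builds the isolated network used for the rest of the run: a target network namespace joined to the initiator through veth pairs and a bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, plus an iptables rule for the NVMe/TCP port. Collected into one sequence (all interface names, addresses and rules are exactly the ones traced above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # target addresses reachable from the initiator
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # initiator reachable from inside the namespace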
00:17:55.940 [2024-07-12 16:21:39.471685] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.940 [2024-07-12 16:21:39.613272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:56.198 [2024-07-12 16:21:39.682431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.198 [2024-07-12 16:21:39.682488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.198 [2024-07-12 16:21:39.682503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.198 [2024-07-12 16:21:39.682513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.198 [2024-07-12 16:21:39.682522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.198 [2024-07-12 16:21:39.682962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.198 [2024-07-12 16:21:39.682976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.198 [2024-07-12 16:21:39.719227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:56.764 16:21:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.764 16:21:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:17:56.764 16:21:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.765 16:21:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.765 16:21:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:56.765 16:21:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.765 16:21:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.765 16:21:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:57.022 [2024-07-12 16:21:40.602127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.022 16:21:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:57.281 Malloc0 00:17:57.281 16:21:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:57.539 16:21:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.797 16:21:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.797 [2024-07-12 16:21:41.493605] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.797 16:21:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81473 00:17:57.797 16:21:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:57.797 16:21:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81473 /var/tmp/bdevperf.sock 00:17:57.797 16:21:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81473 ']' 00:17:57.797 16:21:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.798 16:21:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.798 16:21:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.798 16:21:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.798 16:21:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:58.055 [2024-07-12 16:21:41.553790] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:17:58.055 [2024-07-12 16:21:41.553909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81473 ] 00:17:58.055 [2024-07-12 16:21:41.679474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.055 [2024-07-12 16:21:41.735432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.055 [2024-07-12 16:21:41.762509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:58.987 16:21:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.987 16:21:42 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:17:58.987 16:21:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:58.987 16:21:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:59.245 NVMe0n1 00:17:59.245 16:21:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81498 00:17:59.245 16:21:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.245 16:21:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:17:59.504 Running I/O for 10 seconds... 
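
Everything host/timeout.sh has configured so far reduces to a handful of RPC calls against the two SPDK processes: the target (default /var/tmp/spdk.sock) gets a TCP transport, a Malloc bdev, a subsystem with a namespace and a listener, and the bdevperf initiator (its own /var/tmp/bdevperf.sock) gets a controller with a 5 s loss timeout and a 2 s reconnect delay. A condensed sketch, with all paths and values copied from the log above (the waitforlisten polling of the real script is omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (nvmf_tgt, RPC socket /var/tmp/spdk.sock)
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side (bdevperf in the root namespace, RPC socket /var/tmp/bdevperf.sock)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The two attach options are what the rest of this test exercises: --reconnect-delay-sec sets the spacing of reconnect attempts after a disconnect, and --ctrlr-loss-timeout-sec bounds how long bdev_nvme keeps retrying before it gives the controller up.
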
00:18:00.478 16:21:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.478 [2024-07-12 16:21:44.164250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.478 [2024-07-12 16:21:44.164521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.478 
[2024-07-12 16:21:44.164540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.478 [2024-07-12 16:21:44.164558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.478 [2024-07-12 16:21:44.164576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.478 [2024-07-12 16:21:44.164594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.478 [2024-07-12 16:21:44.164752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.478 [2024-07-12 16:21:44.164782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.478 [2024-07-12 16:21:44.164801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.164984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.164992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.165003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.165011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.165021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.165030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.165130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.165146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.165157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.478 [2024-07-12 16:21:44.165166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.478 [2024-07-12 16:21:44.165176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.165185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.165301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.165313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.165323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.165332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.165566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.165579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.165591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.165599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.165609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.165618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.165628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.165637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.165647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.165655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.165666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.166093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.166121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.166141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.166161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.166180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.166199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.166218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.166531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.166622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.166642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.166661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.166680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.166813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 
16:21:44.167543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.167738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.167747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.168144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.168280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.479 [2024-07-12 16:21:44.168302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.168532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.168551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.168572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.168815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.168837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.168859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.479 [2024-07-12 16:21:44.168893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.479 [2024-07-12 16:21:44.168904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.168913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.168924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.168933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.168943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.168952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.168963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.168971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86336 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.480 [2024-07-12 16:21:44.169701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 
[2024-07-12 16:21:44.169720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.480 [2024-07-12 16:21:44.169834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a6100 is same with the state(5) to be set 00:18:00.480 [2024-07-12 16:21:44.169856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.480 [2024-07-12 16:21:44.169863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.480 [2024-07-12 16:21:44.169871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85816 len:8 PRP1 0x0 PRP2 0x0 00:18:00.480 [2024-07-12 16:21:44.169910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.480 [2024-07-12 16:21:44.169929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.480 [2024-07-12 16:21:44.169937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:18:00.480 [2024-07-12 16:21:44.169945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.169954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.480 [2024-07-12 16:21:44.169978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.480 [2024-07-12 16:21:44.169986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86424 len:8 PRP1 0x0 PRP2 0x0 00:18:00.480 [2024-07-12 16:21:44.169995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.170004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.480 [2024-07-12 16:21:44.170011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.480 [2024-07-12 16:21:44.170019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 PRP1 0x0 PRP2 0x0 00:18:00.480 [2024-07-12 16:21:44.170027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.170037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.480 [2024-07-12 16:21:44.170045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.480 [2024-07-12 16:21:44.170053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86440 len:8 PRP1 0x0 PRP2 0x0 00:18:00.480 [2024-07-12 16:21:44.170064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.170073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.480 [2024-07-12 16:21:44.170080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.480 [2024-07-12 16:21:44.170088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86448 len:8 PRP1 0x0 PRP2 0x0 00:18:00.480 [2024-07-12 16:21:44.170096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.170105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.480 [2024-07-12 16:21:44.170112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.480 [2024-07-12 16:21:44.170120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86456 len:8 PRP1 0x0 PRP2 0x0 00:18:00.480 [2024-07-12 16:21:44.170128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.480 [2024-07-12 16:21:44.170137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.170144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.170152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86464 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.170161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:00.481 [2024-07-12 16:21:44.170170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.170177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.170184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86472 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.170193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.170202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.170209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.170216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86480 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.170225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.170234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.170241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.170264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86488 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.170273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.170297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.170304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.170312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.170320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.170329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.170336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.170344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86504 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.170352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.170837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.170846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.170941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86512 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.170956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.170966] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.170974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.170981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86520 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.170990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85824 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85832 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85840 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85848 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85856 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:18:00.481 [2024-07-12 16:21:44.171200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85864 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85872 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85880 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85888 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85896 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85904 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171386] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85912 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85920 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85928 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85936 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:00.481 [2024-07-12 16:21:44.171512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:00.481 [2024-07-12 16:21:44.171520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85944 len:8 PRP1 0x0 PRP2 0x0 00:18:00.481 [2024-07-12 16:21:44.171528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.481 [2024-07-12 16:21:44.171567] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18a6100 was disconnected and freed. reset controller. 
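
The wall of ABORTED - SQ DELETION completions above is the direct effect of the nvmf_subsystem_remove_listener call at the top of this block: with the 10.0.0.2:4420 listener gone, the target tears the connection down, the submission queue is deleted with up to 128 bdevperf requests (queue depth -q 128) still outstanding, each of them completes with that abort status, qpair 0x18a6100 is freed, and bdev_nvme schedules a controller reset. The injection itself is a single RPC; a minimal sketch using only commands that appear in this log (the expectation that NVMe0 is still registered while reconnect attempts run is the same check host/timeout.sh performs a little further down):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the listener out from under the initiator while I/O is in flight.
    "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # The controller stays registered on the initiator while bdev_nvme retries the connection.
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: NVMe0
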
00:18:00.481 [2024-07-12 16:21:44.171800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:00.481 [2024-07-12 16:21:44.171871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855780 (9): Bad file descriptor 00:18:00.481 [2024-07-12 16:21:44.172331] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:00.481 [2024-07-12 16:21:44.172449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855780 with addr=10.0.0.2, port=4420 00:18:00.481 [2024-07-12 16:21:44.173021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855780 is same with the state(5) to be set 00:18:00.482 [2024-07-12 16:21:44.173471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855780 (9): Bad file descriptor 00:18:00.482 [2024-07-12 16:21:44.173953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:00.482 [2024-07-12 16:21:44.174388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:00.482 [2024-07-12 16:21:44.174809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:00.482 [2024-07-12 16:21:44.174842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:00.482 [2024-07-12 16:21:44.174855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:00.482 16:21:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:03.013 [2024-07-12 16:21:46.175024] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.013 [2024-07-12 16:21:46.175104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855780 with addr=10.0.0.2, port=4420 00:18:03.013 [2024-07-12 16:21:46.175120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855780 is same with the state(5) to be set 00:18:03.013 [2024-07-12 16:21:46.175145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855780 (9): Bad file descriptor 00:18:03.013 [2024-07-12 16:21:46.175164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:03.013 [2024-07-12 16:21:46.175174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:03.013 [2024-07-12 16:21:46.175185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.013 [2024-07-12 16:21:46.175221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
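The same retry pattern repeats for as long as the listener stays removed: uring_sock_create fails with errno = 111, the reconnect poll gives up, the controller is marked failed, and the next attempt starts after the configured reconnect delay. Errno 111 is the ordinary Linux ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 at that moment; if the mapping is not at hand it can be confirmed with a shell one-liner (plain Python via the shell, nothing SPDK-specific):

# Confirm what errno 111 means on Linux (prints "ECONNREFUSED - Connection refused").
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'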
00:18:03.013 [2024-07-12 16:21:46.175247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:03.013 16:21:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:04.914 [2024-07-12 16:21:48.175424] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.914 [2024-07-12 16:21:48.175501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1855780 with addr=10.0.0.2, port=4420 00:18:04.914 [2024-07-12 16:21:48.175516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855780 is same with the state(5) to be set 00:18:04.914 [2024-07-12 16:21:48.175541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855780 (9): Bad file descriptor 00:18:04.914 [2024-07-12 16:21:48.175557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:04.914 [2024-07-12 16:21:48.175566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:04.915 [2024-07-12 16:21:48.175577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:04.915 [2024-07-12 16:21:48.175601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:04.915 [2024-07-12 16:21:48.175611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:06.816 [2024-07-12 16:21:50.175638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:06.816 [2024-07-12 16:21:50.175684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:06.816 [2024-07-12 16:21:50.175711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:06.816 [2024-07-12 16:21:50.175721] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:06.816 [2024-07-12 16:21:50.175744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
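The two checks traced at host/timeout.sh@41 and @37 above simply ask the bdevperf application, over its RPC socket, which NVMe controller and bdev it currently holds and compare the names against NVMe0 / NVMe0n1. A minimal stand-alone sketch of the same probe, assuming the rpc.py path, RPC socket and naming used in this run, would look like this:

# Sketch of the get_controller / get_bdev probes seen in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')

# While the controller is attached both names are reported (NVMe0 / NVMe0n1);
# once it has gone away, as in the 16:21:51 checks in the next block, both
# queries return an empty list and the variables are empty strings.
if [[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]]; then
    echo "controller and bdev still present"
else
    echo "controller/bdev gone: ctrlr='$ctrlr' bdev='$bdev'"
fi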
00:18:07.750 00:18:07.750 Latency(us) 00:18:07.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.750 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:07.750 Verification LBA range: start 0x0 length 0x4000 00:18:07.750 NVMe0n1 : 8.15 1311.66 5.12 15.71 0.00 96255.81 3157.64 7015926.69 00:18:07.750 =================================================================================================================== 00:18:07.750 Total : 1311.66 5.12 15.71 0.00 96255.81 3157.64 7015926.69 00:18:07.750 0 00:18:08.008 16:21:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:18:08.008 16:21:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:08.008 16:21:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:08.267 16:21:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:08.267 16:21:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:18:08.267 16:21:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:08.267 16:21:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 81498 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81473 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81473 ']' 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81473 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81473 00:18:08.526 killing process with pid 81473 00:18:08.526 Received shutdown signal, test time was about 9.168815 seconds 00:18:08.526 00:18:08.526 Latency(us) 00:18:08.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.526 =================================================================================================================== 00:18:08.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81473' 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81473 00:18:08.526 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81473 00:18:08.784 16:21:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.042 [2024-07-12 16:21:52.581988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 
00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81615 00:18:09.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81615 /var/tmp/bdevperf.sock 00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81615 ']' 00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.042 16:21:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:09.042 [2024-07-12 16:21:52.643824] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:18:09.042 [2024-07-12 16:21:52.643955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81615 ] 00:18:09.299 [2024-07-12 16:21:52.778688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.299 [2024-07-12 16:21:52.834283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.299 [2024-07-12 16:21:52.861649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:09.864 16:21:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.864 16:21:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:09.864 16:21:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:10.122 16:21:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:10.381 NVMe0n1 00:18:10.381 16:21:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81639 00:18:10.381 16:21:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:10.381 16:21:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:10.638 Running I/O for 10 seconds... 
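Condensed, the setup traced above is: start bdevperf in RPC-server mode (-z) with the verify workload, wait for its RPC socket, apply the same bdev_nvme_set_options call as the trace (-r -1), attach the target with a 1-second reconnect delay, a 2-second fast-io-fail timeout and a 5-second controller-loss timeout, and then kick the run off through bdevperf.py. A rough hand-written equivalent, with the polling loop standing in for the autotest waitforlisten helper (an assumption, not the real helper), is:

# Sketch of the bdevperf + attach sequence above; paths, flags and timeout values
# are taken from the trace, the wait loop is a simplified stand-in for waitforlisten.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

"$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!

# Wait until the application answers on its RPC socket.
until "$spdk"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Drive the configured job; bdevperf.py returns when the run finishes.
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests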
00:18:11.575 16:21:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.575 [2024-07-12 16:21:55.290854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.575 [2024-07-12 16:21:55.290908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.290946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.290955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.290966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.290974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.290984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.290992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 
[2024-07-12 16:21:55.291098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.575 [2024-07-12 16:21:55.291614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.575 [2024-07-12 16:21:55.291623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291857] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.291968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.291995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292115] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71496 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.292450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.292461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 
[2024-07-12 16:21:55.293415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.576 [2024-07-12 16:21:55.293778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.576 [2024-07-12 16:21:55.293786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.293805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.293823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.293842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.293861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.293879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.293915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.293948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.293973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.293984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.294009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.294028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.294048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.294068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:11.577 [2024-07-12 16:21:55.294079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 
16:21:55.294303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.577 [2024-07-12 16:21:55.294403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.577 [2024-07-12 16:21:55.294422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dc100 is same with the state(5) to be set 00:18:11.577 [2024-07-12 16:21:55.294444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.577 [2024-07-12 16:21:55.294451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.577 [2024-07-12 16:21:55.294459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71832 len:8 PRP1 0x0 PRP2 0x0 00:18:11.577 [2024-07-12 16:21:55.294614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.577 [2024-07-12 16:21:55.294757] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21dc100 was disconnected and freed. reset controller. 
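Pulling the listener at host/timeout.sh@87 drains the whole queue depth at once, which is why the block above is one long run of ABORTED - SQ DELETION completions before the qpair is disconnected and freed and the reset path takes over. When triaging a saved copy of a console log like this (the file name below is an assumption), the burst is easier to summarize than to read:

# Hypothetical triage pass over a saved console log: count the aborted completions
# and list which qpairs were disconnected, instead of reading the burst line by line.
log=console.log

grep -o 'ABORTED - SQ DELETION (00/08)' "$log" | wc -l
grep -o 'qpair 0x[0-9a-f]* was disconnected and freed' "$log" | sort | uniq -c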
00:18:11.577 [2024-07-12 16:21:55.295080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:11.577 [2024-07-12 16:21:55.295105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:11.577 [2024-07-12 16:21:55.295117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:11.577 [2024-07-12 16:21:55.295126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:11.577 [2024-07-12 16:21:55.295136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:11.577 [2024-07-12 16:21:55.295145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:11.577 [2024-07-12 16:21:55.295155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:11.577 [2024-07-12 16:21:55.295164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:11.577 [2024-07-12 16:21:55.295173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b780 is same with the state(5) to be set
00:18:11.577 [2024-07-12 16:21:55.295420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:11.577 [2024-07-12 16:21:55.295443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b780 (9): Bad file descriptor
00:18:11.577 [2024-07-12 16:21:55.295525] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:11.577 [2024-07-12 16:21:55.295545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b780 with addr=10.0.0.2, port=4420
00:18:11.577 [2024-07-12 16:21:55.295554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b780 is same with the state(5) to be set
00:18:11.577 [2024-07-12 16:21:55.295571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b780 (9): Bad file descriptor
00:18:11.577 [2024-07-12 16:21:55.295585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:11.577 [2024-07-12 16:21:55.295593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:11.577 [2024-07-12 16:21:55.295602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:11.577 [2024-07-12 16:21:55.295620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:11.577 [2024-07-12 16:21:55.295630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:11.836 16:21:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:18:12.768 [2024-07-12 16:21:56.295757] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:12.768 [2024-07-12 16:21:56.295842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b780 with addr=10.0.0.2, port=4420
00:18:12.768 [2024-07-12 16:21:56.295859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b780 is same with the state(5) to be set
00:18:12.768 [2024-07-12 16:21:56.295902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b780 (9): Bad file descriptor
00:18:12.768 [2024-07-12 16:21:56.295925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:12.768 [2024-07-12 16:21:56.295936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:12.768 [2024-07-12 16:21:56.295947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:12.768 [2024-07-12 16:21:56.295973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:12.768 [2024-07-12 16:21:56.295985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:12.768 16:21:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:13.025 [2024-07-12 16:21:56.552563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:13.025 16:21:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 81639
00:18:13.597 [2024-07-12 16:21:57.314316] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:21.734
00:18:21.734                                                            Latency(us)
00:18:21.734 Device Information          : runtime(s)   IOPS     MiB/s   Fail/s   TO/s    Average        min        max
00:18:21.734 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:21.734 Verification LBA range: start 0x0 length 0x4000
00:18:21.734 NVMe0n1                     : 10.01        7053.38  27.55   0.00     0.00    18110.64       1608.61    3019898.88
00:18:21.734 ===================================================================================================================
00:18:21.734 Total                       :              7053.38  27.55   0.00     0.00    18110.64       1608.61    3019898.88
00:18:21.734 0
00:18:21.734 16:22:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81743
00:18:21.734 16:22:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:21.734 16:22:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:18:21.734 Running I/O for 10 seconds...
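The trace around this point is the heart of the nvmf_timeout host test: while the target has no listener on 10.0.0.2:4420, every reconnect attempt fails with errno 111 (ECONNREFUSED) and queued I/O is completed as ABORTED - SQ DELETION; once host/timeout.sh@91 re-adds the listener, the controller reset succeeds, bdevperf prints the latency summary above, and perform_tests is started again over the bdevperf RPC socket before the listener is removed once more below (host/timeout.sh@99). A minimal sketch of that listener toggle follows, built only from the rpc.py and bdevperf.py invocations visible in this log; the variable names, ordering, and sleep duration are illustrative assumptions, not the actual timeout.sh logic.

    #!/usr/bin/env bash
    # Sketch only: reproduces the listener remove/re-add sequence seen in this log.
    # Paths, NQN, and the 10.0.0.2:4420 listener are taken from the trace above;
    # the sleep below is illustrative, not the value used by host/timeout.sh.
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop the TCP listener: in-flight I/O gets aborted (SQ DELETION) and the host's
    # reconnect attempts fail with ECONNREFUSED (errno 111) while it is gone.
    "$RPC_PY" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 1

    # Restore the listener; the next controller reset/reconnect then succeeds.
    "$RPC_PY" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Re-run the bdevperf job over its RPC socket, as the test does after recovery.
    "$BDEVPERF_PY" -s /var/tmp/bdevperf.sock perform_tests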
00:18:21.734 16:22:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.734 [2024-07-12 16:22:05.429658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.734 [2024-07-12 16:22:05.429712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.734 [2024-07-12 16:22:05.429767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.734 [2024-07-12 16:22:05.429787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.734 [2024-07-12 16:22:05.429806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.734 [2024-07-12 16:22:05.429825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.734 [2024-07-12 16:22:05.429844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.734 [2024-07-12 16:22:05.429879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.734 [2024-07-12 16:22:05.429929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.734 [2024-07-12 16:22:05.429951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.734 
[2024-07-12 16:22:05.429972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.429983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21da0c0 is same with the state(5) to be set 00:18:21.734 [2024-07-12 16:22:05.429996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67744 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67872 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67880 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67888 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67896 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67904 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67912 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67920 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67928 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.734 [2024-07-12 16:22:05.430350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.734 [2024-07-12 16:22:05.430357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.734 [2024-07-12 16:22:05.430364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67936 len:8 PRP1 0x0 PRP2 0x0 00:18:21.734 [2024-07-12 16:22:05.430372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67944 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:67952 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67960 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67968 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67976 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67984 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67992 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68000 len:8 PRP1 0x0 PRP2 0x0 
00:18:21.735 [2024-07-12 16:22:05.430614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68008 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68016 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68024 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68032 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68040 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68048 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68056 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68064 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68072 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68080 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.430973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68088 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.430982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.430991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.430999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.431017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68096 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.431026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.431036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.431043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.431051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68104 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.431060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.431069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.431076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.431085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68112 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.431094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.431104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.431111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.431119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68120 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.431128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.735 [2024-07-12 16:22:05.431138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.735 [2024-07-12 16:22:05.431146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.735 [2024-07-12 16:22:05.431154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68128 len:8 PRP1 0x0 PRP2 0x0 00:18:21.735 [2024-07-12 16:22:05.431163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68136 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68144 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:21.736 [2024-07-12 16:22:05.431268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68152 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68160 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68168 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68176 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68184 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68192 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431475] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68200 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68208 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68216 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68224 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68232 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68240 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68248 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68256 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68264 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68272 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68280 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.431859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68288 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.431903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 
16:22:05.431910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.431918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68296 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.431927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.432115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.432126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.432135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68304 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.432144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.432154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.432161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.432169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68312 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.432178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.432193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.432201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.432208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68320 len:8 PRP1 0x0 PRP2 0x0 00:18:21.736 [2024-07-12 16:22:05.432217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.736 [2024-07-12 16:22:05.432227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.736 [2024-07-12 16:22:05.432234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.736 [2024-07-12 16:22:05.432256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68328 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.432265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.432274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.432281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.432288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68336 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.432296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.432305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.432312] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.432320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68344 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.432328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.432354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.432361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.432369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68352 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.432378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.432387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.432394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.432402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68360 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.432411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.432420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68368 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68376 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68384 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68392 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68400 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68408 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68416 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68424 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68432 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 
16:22:05.433783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68440 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68448 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.433841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.433850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.433857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.433881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68456 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.434145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.434646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.434804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68464 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.435414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.435432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.435441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68472 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.435459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.435468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.435475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68480 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.435492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.435502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.435509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68488 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.435526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.435535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.435542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68496 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.435559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.435568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.435590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68504 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.435606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.435616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.435623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68512 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.435639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.435648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.435655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68520 len:8 PRP1 0x0 PRP2 0x0 00:18:21.737 [2024-07-12 16:22:05.435683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.737 [2024-07-12 16:22:05.435692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.737 [2024-07-12 16:22:05.435699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.737 [2024-07-12 16:22:05.435722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68528 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.435730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.435739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.435745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.435772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:68536 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.435780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.435789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.435796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.435803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68544 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.446824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.446855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.446899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.446910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68552 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.446920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.446931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.446939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.446947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68560 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.446956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.446965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.446972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.446980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68568 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.446990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68576 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68584 len:8 PRP1 0x0 PRP2 0x0 
00:18:21.738 [2024-07-12 16:22:05.447057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68592 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68600 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68608 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68616 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68624 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68632 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68640 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68648 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68656 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68664 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68672 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.447958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68680 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.447970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.738 [2024-07-12 16:22:05.447984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.738 [2024-07-12 16:22:05.447993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.738 [2024-07-12 16:22:05.448004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68688 len:8 PRP1 0x0 PRP2 0x0 00:18:21.738 [2024-07-12 16:22:05.448017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68696 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68704 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68712 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68720 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68728 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:21.739 [2024-07-12 16:22:05.448284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68736 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68744 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67752 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67760 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67768 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67776 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448606] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67784 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67792 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.739 [2024-07-12 16:22:05.448709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.739 [2024-07-12 16:22:05.448720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67800 len:8 PRP1 0x0 PRP2 0x0 00:18:21.739 [2024-07-12 16:22:05.448732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448790] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21da0c0 was disconnected and freed. reset controller. 
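The long run of WRITE/READ completions above is the expected fallout of the controller reset in this test: once the I/O submission queue is deleted, every command still queued on qpair 0x21da0c0 is completed manually with the NVMe status ABORTED - SQ DELETION (00/08), i.e. generic status code type 0x0, status code 0x08 (Command Aborted due to SQ Deletion), before the qpair is disconnected and freed. If a rough count of how many queued commands were flushed is needed, grepping the captured console output is enough (the log file name below is just an example, not something produced by this job):

    # count the commands completed with "ABORTED - SQ DELETION" during the reset
    grep -c 'ABORTED - SQ DELETION' nvmf-tcp-uring-vg-autotest.console.log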
00:18:21.739 [2024-07-12 16:22:05.448934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.739 [2024-07-12 16:22:05.448959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.448975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.739 [2024-07-12 16:22:05.448989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.449003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.739 [2024-07-12 16:22:05.449016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.449030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.739 [2024-07-12 16:22:05.449042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.739 [2024-07-12 16:22:05.449055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b780 is same with the state(5) to be set 00:18:21.739 [2024-07-12 16:22:05.449722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:21.739 [2024-07-12 16:22:05.449781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b780 (9): Bad file descriptor 00:18:21.739 [2024-07-12 16:22:05.449972] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.739 [2024-07-12 16:22:05.450003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b780 with addr=10.0.0.2, port=4420 00:18:21.739 [2024-07-12 16:22:05.450018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b780 is same with the state(5) to be set 00:18:21.739 [2024-07-12 16:22:05.450044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b780 (9): Bad file descriptor 00:18:21.739 [2024-07-12 16:22:05.450066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:21.739 [2024-07-12 16:22:05.450079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:21.739 [2024-07-12 16:22:05.450092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:21.739 [2024-07-12 16:22:05.450120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:21.739 [2024-07-12 16:22:05.450135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:21.739 16:22:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:23.115 [2024-07-12 16:22:06.450250] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:23.115 [2024-07-12 16:22:06.450316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b780 with addr=10.0.0.2, port=4420 00:18:23.115 [2024-07-12 16:22:06.450332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b780 is same with the state(5) to be set 00:18:23.115 [2024-07-12 16:22:06.450354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b780 (9): Bad file descriptor 00:18:23.115 [2024-07-12 16:22:06.450371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:23.115 [2024-07-12 16:22:06.450379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:23.115 [2024-07-12 16:22:06.450389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:23.115 [2024-07-12 16:22:06.450412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:23.115 [2024-07-12 16:22:06.450423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.050 [2024-07-12 16:22:07.450557] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.050 [2024-07-12 16:22:07.450642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b780 with addr=10.0.0.2, port=4420 00:18:24.050 [2024-07-12 16:22:07.450658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b780 is same with the state(5) to be set 00:18:24.050 [2024-07-12 16:22:07.450684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b780 (9): Bad file descriptor 00:18:24.050 [2024-07-12 16:22:07.450702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:24.050 [2024-07-12 16:22:07.450710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:24.050 [2024-07-12 16:22:07.450720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:24.050 [2024-07-12 16:22:07.450744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
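Each reconnect attempt in this window dies in uring_sock_create() with errno = 111: the target's TCP listener on 10.0.0.2:4420 is not available at this point in the test, so connect() is refused, the controller is marked failed, and another reset attempt is scheduled about a second later (16:22:06, 16:22:07, 16:22:08 above). On Linux errno 111 is ECONNREFUSED; if in doubt, the mapping can be checked with a one-liner (purely illustrative, not part of the test):

    # errno 111 on Linux is ECONNREFUSED ("Connection refused")
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'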
00:18:24.050 [2024-07-12 16:22:07.450755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.984 [2024-07-12 16:22:08.453835] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.984 [2024-07-12 16:22:08.453971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b780 with addr=10.0.0.2, port=4420 00:18:24.984 [2024-07-12 16:22:08.453988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b780 is same with the state(5) to be set 00:18:24.984 [2024-07-12 16:22:08.454267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b780 (9): Bad file descriptor 00:18:24.984 [2024-07-12 16:22:08.454841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:24.984 [2024-07-12 16:22:08.454866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:24.984 [2024-07-12 16:22:08.454877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:24.984 16:22:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.984 [2024-07-12 16:22:08.459211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:24.984 [2024-07-12 16:22:08.459274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.984 [2024-07-12 16:22:08.691427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.242 16:22:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 81743 00:18:25.810 [2024-07-12 16:22:09.491173] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
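Recovery happens as soon as host/timeout.sh@102 re-creates the listener: the target reports "NVMe/TCP Target Listening on 10.0.0.2 port 4420" and the very next reset attempt completes with "Resetting controller successful". A minimal sketch of that recovery step, using the same SPDK rpc.py seen in the trace (the get-listeners call is only an optional sanity check and is not part of the test script):

    # re-create the TCP listener that was removed earlier in the test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # optionally confirm the listener is back before expecting the host to reconnect
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1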
00:18:31.076 00:18:31.076 Latency(us) 00:18:31.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.076 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.076 Verification LBA range: start 0x0 length 0x4000 00:18:31.076 NVMe0n1 : 10.01 5678.24 22.18 3772.71 0.00 13515.81 588.33 3035150.89 00:18:31.076 =================================================================================================================== 00:18:31.076 Total : 5678.24 22.18 3772.71 0.00 13515.81 0.00 3035150.89 00:18:31.076 0 00:18:31.076 16:22:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81615 00:18:31.076 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81615 ']' 00:18:31.076 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81615 00:18:31.076 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:31.076 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.076 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81615 00:18:31.076 killing process with pid 81615 00:18:31.076 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.076 00:18:31.076 Latency(us) 00:18:31.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.076 =================================================================================================================== 00:18:31.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.076 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81615' 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81615 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81615 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81858 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81858 /var/tmp/bdevperf.sock 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81858 ']' 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:31.077 [2024-07-12 16:22:14.529712] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
00:18:31.077 [2024-07-12 16:22:14.529984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81858 ] 00:18:31.077 [2024-07-12 16:22:14.661721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.077 [2024-07-12 16:22:14.714374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.077 [2024-07-12 16:22:14.741622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81861 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81858 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:31.077 16:22:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:31.335 16:22:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:31.593 NVMe0n1 00:18:31.851 16:22:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:31.851 16:22:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81898 00:18:31.851 16:22:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:31.851 Running I/O for 10 seconds... 
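The second bdevperf instance is configured entirely over its own RPC socket: host/timeout.sh sets the bdev_nvme options (@118), attaches the controller with a 2 s reconnect delay and a 5 s controller-loss timeout (@120), and then starts the workload through bdevperf.py (@123). Pulled together from the trace above, the host-side sequence looks roughly like this (paths, NQN and addresses are exactly the ones this test uses):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # bdev_nvme options used by the test (host/timeout.sh@118)
    $rpc -s $sock bdev_nvme_set_options -r -1 -e 9
    # attach the controller: reconnect every 2 s, give up after the controller
    # has been unreachable for 5 s
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # kick off the I/O run bdevperf was started with (-q 128 -o 4096 -w randread -t 10)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests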
00:18:32.785 16:22:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.046 [2024-07-12 16:22:16.561384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 
[2024-07-12 16:22:16.561656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.561867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.561878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.046 [2024-07-12 16:22:16.562757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.046 [2024-07-12 16:22:16.562768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.562987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.562999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:33.047 [2024-07-12 16:22:16.563020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563225] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.563780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.563791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.047 [2024-07-12 16:22:16.564326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.047 [2024-07-12 16:22:16.564335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77088 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:33.048 [2024-07-12 16:22:16.564731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.564966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 
16:22:16.564986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.564998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.048 [2024-07-12 16:22:16.565697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.048 [2024-07-12 16:22:16.565709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.049 [2024-07-12 16:22:16.565718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.049 [2024-07-12 16:22:16.565730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.049 [2024-07-12 16:22:16.565739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.049 [2024-07-12 16:22:16.565750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff0b0 is same with the state(5) to be set 00:18:33.049 [2024-07-12 16:22:16.565762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.049 [2024-07-12 16:22:16.565770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.049 [2024-07-12 16:22:16.565778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46648 len:8 PRP1 0x0 PRP2 0x0 00:18:33.049 [2024-07-12 16:22:16.565787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.049 [2024-07-12 16:22:16.565830] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dff0b0 was disconnected and freed. reset controller. 
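The block above is the host-side tail of a controller reset: every READ still queued on I/O qpair 1 (cid 50 through cid 126) is completed manually with ABORTED - SQ DELETION (status 00/08) before the qpair at 0x1dff0b0 is disconnected and freed. If you need to summarize a capture like this rather than read it record by record, a grep/uniq pass over the saved console output is enough. This is a sketch only; "build.log" is a made-up name for a saved copy of this output, not a file the job writes.

    # Tally the abort completions per queue id (here they should all land on qid:1).
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c
    # Count how many distinct queued READ commands on submission queue 1 were printed
    # before the qpair was freed.
    grep -o 'READ sqid:1 cid:[0-9]*' build.log | sort -u | wc -l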
00:18:33.049 [2024-07-12 16:22:16.565940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.049 [2024-07-12 16:22:16.565959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.049 [2024-07-12 16:22:16.565970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.049 [2024-07-12 16:22:16.565979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.049 [2024-07-12 16:22:16.565989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.049 [2024-07-12 16:22:16.565999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.049 [2024-07-12 16:22:16.566015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.049 [2024-07-12 16:22:16.566024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.049 [2024-07-12 16:22:16.566033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf8f0 is same with the state(5) to be set 00:18:33.049 [2024-07-12 16:22:16.566286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.049 [2024-07-12 16:22:16.566310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf8f0 (9): Bad file descriptor 00:18:33.049 [2024-07-12 16:22:16.566410] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.049 [2024-07-12 16:22:16.566431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1daf8f0 with addr=10.0.0.2, port=4420 00:18:33.049 [2024-07-12 16:22:16.566443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf8f0 is same with the state(5) to be set 00:18:33.049 [2024-07-12 16:22:16.566461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf8f0 (9): Bad file descriptor 00:18:33.049 [2024-07-12 16:22:16.566476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:33.049 [2024-07-12 16:22:16.566486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:33.049 [2024-07-12 16:22:16.566497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.049 [2024-07-12 16:22:16.566517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
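Two details in the records above are worth decoding. The errno 111 reported by uring_sock_create's connect() is ECONNREFUSED on Linux, meaning nothing is accepting on 10.0.0.2 port 4420 while the target side is being torn down, and the follow-on "(9): Bad file descriptor" is just the already-closed socket being flushed. A one-liner confirms the errno mapping on the build host; python3 is assumed to be available, as it already is for the SPDK rpc scripts used later in this run.

    # Translate the numeric errno printed above into its symbolic name and message.
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # Expected output on Linux: ECONNREFUSED - Connection refused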
00:18:33.049 [2024-07-12 16:22:16.566527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.049 16:22:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 81898 00:18:34.950 [2024-07-12 16:22:18.566766] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.950 [2024-07-12 16:22:18.566831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1daf8f0 with addr=10.0.0.2, port=4420 00:18:34.950 [2024-07-12 16:22:18.566847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf8f0 is same with the state(5) to be set 00:18:34.950 [2024-07-12 16:22:18.566869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf8f0 (9): Bad file descriptor 00:18:34.950 [2024-07-12 16:22:18.566919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.950 [2024-07-12 16:22:18.566930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.950 [2024-07-12 16:22:18.566940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.950 [2024-07-12 16:22:18.566965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.950 [2024-07-12 16:22:18.566975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:36.876 [2024-07-12 16:22:20.567174] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.876 [2024-07-12 16:22:20.567256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1daf8f0 with addr=10.0.0.2, port=4420 00:18:36.876 [2024-07-12 16:22:20.567272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf8f0 is same with the state(5) to be set 00:18:36.876 [2024-07-12 16:22:20.567296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf8f0 (9): Bad file descriptor 00:18:36.876 [2024-07-12 16:22:20.567313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:36.876 [2024-07-12 16:22:20.567321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:36.876 [2024-07-12 16:22:20.567331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:36.876 [2024-07-12 16:22:20.567355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:36.876 [2024-07-12 16:22:20.567366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:39.402 [2024-07-12 16:22:22.567508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
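The reconnect attempts above arrive almost exactly two seconds apart (16:22:16, :18, :20, :22), which is the reconnect-delay behaviour this timeout test is exercising: bdev_nvme keeps scheduling a delayed reconnect until the controller-loss timeout expires and the controller is left in failed state. As a rough illustration of where that cadence comes from, the attach call below shows the kind of knobs involved. The flag spellings are an assumption based on current SPDK rpc.py usage, not taken from this run's command line, so check scripts/rpc.py bdev_nvme_attach_controller -h on the build in question before reusing them; only the address, port and subsystem NQN mirror values visible in this log.

    # Sketch only: attach a bdev_nvme controller with an explicit reconnect cadence.
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 5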
00:18:39.402 [2024-07-12 16:22:22.567549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:39.403 [2024-07-12 16:22:22.567560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:39.403 [2024-07-12 16:22:22.567571] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:39.403 [2024-07-12 16:22:22.567596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:39.969 00:18:39.969 Latency(us) 00:18:39.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.969 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:39.969 NVMe0n1 : 8.13 2249.77 8.79 15.75 0.00 56442.27 7000.44 7015926.69 00:18:39.969 =================================================================================================================== 00:18:39.969 Total : 2249.77 8.79 15.75 0.00 56442.27 7000.44 7015926.69 00:18:39.969 0 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.969 Attaching 5 probes... 00:18:39.969 1240.752317: reset bdev controller NVMe0 00:18:39.969 1240.821982: reconnect bdev controller NVMe0 00:18:39.969 3241.122517: reconnect delay bdev controller NVMe0 00:18:39.969 3241.157678: reconnect bdev controller NVMe0 00:18:39.969 5241.516511: reconnect delay bdev controller NVMe0 00:18:39.969 5241.534677: reconnect bdev controller NVMe0 00:18:39.969 7241.942316: reconnect delay bdev controller NVMe0 00:18:39.969 7241.961507: reconnect bdev controller NVMe0 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 81861 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81858 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81858 ']' 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81858 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81858 00:18:39.969 killing process with pid 81858 00:18:39.969 Received shutdown signal, test time was about 8.181317 seconds 00:18:39.969 00:18:39.969 Latency(us) 00:18:39.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.969 =================================================================================================================== 00:18:39.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81858' 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 81858 00:18:39.969 16:22:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81858 00:18:40.226 16:22:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.483 rmmod nvme_tcp 00:18:40.483 rmmod nvme_fabrics 00:18:40.483 rmmod nvme_keyring 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81423 ']' 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81423 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81423 ']' 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81423 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81423 00:18:40.483 killing process with pid 81423 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81423' 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81423 00:18:40.483 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81423 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:40.741 ************************************ 00:18:40.741 END TEST 
nvmf_timeout 00:18:40.741 ************************************ 00:18:40.741 00:18:40.741 real 0m45.389s 00:18:40.741 user 2m13.075s 00:18:40.741 sys 0m5.273s 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:40.741 16:22:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:40.741 16:22:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:40.741 16:22:24 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:18:40.741 16:22:24 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:18:40.741 16:22:24 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.741 16:22:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:40.741 16:22:24 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:18:40.741 00:18:40.741 real 11m46.807s 00:18:40.741 user 28m47.651s 00:18:40.741 sys 2m56.687s 00:18:40.741 ************************************ 00:18:40.741 END TEST nvmf_tcp 00:18:40.741 ************************************ 00:18:40.741 16:22:24 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:40.741 16:22:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:40.741 16:22:24 -- common/autotest_common.sh@1142 -- # return 0 00:18:40.741 16:22:24 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:18:40.741 16:22:24 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:40.741 16:22:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:40.741 16:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.741 16:22:24 -- common/autotest_common.sh@10 -- # set +x 00:18:41.000 ************************************ 00:18:41.000 START TEST nvmf_dif 00:18:41.000 ************************************ 00:18:41.000 16:22:24 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:41.000 * Looking for test storage... 
00:18:41.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:41.000 16:22:24 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.000 16:22:24 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.000 16:22:24 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.000 16:22:24 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.000 16:22:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.000 16:22:24 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.000 16:22:24 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.000 16:22:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:41.000 16:22:24 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.000 16:22:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:41.000 16:22:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:41.000 16:22:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:41.000 16:22:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:41.000 16:22:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.000 16:22:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:41.000 16:22:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.000 16:22:24 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:41.000 Cannot find device "nvmf_tgt_br" 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@155 -- # true 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.000 Cannot find device "nvmf_tgt_br2" 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@156 -- # true 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:41.000 Cannot find device "nvmf_tgt_br" 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@158 -- # true 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:41.000 Cannot find device "nvmf_tgt_br2" 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@159 -- # true 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@162 -- # true 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@163 -- # true 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.000 16:22:24 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.257 16:22:24 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.257 16:22:24 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.257 16:22:24 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.257 16:22:24 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.257 16:22:24 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.257 16:22:24 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.257 16:22:24 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.257 16:22:24 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.258 
16:22:24 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:41.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:41.258 00:18:41.258 --- 10.0.0.2 ping statistics --- 00:18:41.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.258 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:41.258 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.258 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:18:41.258 00:18:41.258 --- 10.0.0.3 ping statistics --- 00:18:41.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.258 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:41.258 00:18:41.258 --- 10.0.0.1 ping statistics --- 00:18:41.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.258 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:18:41.258 16:22:24 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:41.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:41.515 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:41.515 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:41.809 16:22:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:41.809 16:22:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.809 16:22:25 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.809 16:22:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:41.809 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82340 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82340 00:18:41.809 16:22:25 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 82340 ']' 00:18:41.809 16:22:25 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.809 16:22:25 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.809 16:22:25 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:41.809 16:22:25 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.809 16:22:25 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.809 16:22:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:41.809 [2024-07-12 16:22:25.357750] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:18:41.809 [2024-07-12 16:22:25.357833] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.809 [2024-07-12 16:22:25.499931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.068 [2024-07-12 16:22:25.567463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.068 [2024-07-12 16:22:25.567523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.068 [2024-07-12 16:22:25.567538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.068 [2024-07-12 16:22:25.567549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.068 [2024-07-12 16:22:25.567557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
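Because nvmf_tgt was launched with -e 0xFFFF, every tracepoint group is enabled and the app advertises two ways to get at the trace data in the NOTICE lines just above. Spelled out as commands, under the assumption that spdk_trace sits in build/bin of the same repo checkout used for nvmf_tgt earlier in this line:

    # Snapshot the live trace ring for shm instance 0, exactly as the NOTICE suggests.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file around for offline analysis after the app exits.
    cp /dev/shm/nvmf_trace.0 /tmp/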
00:18:42.068 [2024-07-12 16:22:25.567586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.068 [2024-07-12 16:22:25.600215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:42.633 16:22:26 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.633 16:22:26 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:18:42.633 16:22:26 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.633 16:22:26 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:42.633 16:22:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:42.891 16:22:26 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.891 16:22:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:18:42.891 16:22:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:42.891 16:22:26 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.891 16:22:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:42.891 [2024-07-12 16:22:26.377084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.891 16:22:26 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.891 16:22:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:42.891 16:22:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:42.891 16:22:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.891 16:22:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:42.891 ************************************ 00:18:42.891 START TEST fio_dif_1_default 00:18:42.891 ************************************ 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:42.891 bdev_null0 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.891 16:22:26 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:42.891 [2024-07-12 16:22:26.425152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:42.891 { 00:18:42.891 "params": { 00:18:42.891 "name": "Nvme$subsystem", 00:18:42.891 "trtype": "$TEST_TRANSPORT", 00:18:42.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:42.891 "adrfam": "ipv4", 00:18:42.891 "trsvcid": "$NVMF_PORT", 00:18:42.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:42.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:42.891 "hdgst": ${hdgst:-false}, 00:18:42.891 "ddgst": ${ddgst:-false} 00:18:42.891 }, 00:18:42.891 "method": "bdev_nvme_attach_controller" 00:18:42.891 } 00:18:42.891 EOF 00:18:42.891 )") 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@72 -- # (( file = 1 )) 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:42.891 "params": { 00:18:42.891 "name": "Nvme0", 00:18:42.891 "trtype": "tcp", 00:18:42.891 "traddr": "10.0.0.2", 00:18:42.891 "adrfam": "ipv4", 00:18:42.891 "trsvcid": "4420", 00:18:42.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:42.891 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:42.891 "hdgst": false, 00:18:42.891 "ddgst": false 00:18:42.891 }, 00:18:42.891 "method": "bdev_nvme_attach_controller" 00:18:42.891 }' 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:42.891 16:22:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.150 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:43.150 fio-3.35 00:18:43.150 Starting 1 thread 00:18:55.361 00:18:55.361 filename0: (groupid=0, jobs=1): err= 0: pid=82407: Fri Jul 12 16:22:37 2024 00:18:55.361 read: IOPS=9145, BW=35.7MiB/s (37.5MB/s)(357MiB/10001msec) 00:18:55.361 slat (usec): min=6, max=205, avg= 8.40, stdev= 3.67 00:18:55.361 clat (usec): min=328, max=3015, avg=412.49, stdev=45.69 00:18:55.361 lat (usec): min=334, max=3040, avg=420.88, stdev=46.53 00:18:55.361 clat percentiles (usec): 00:18:55.361 | 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 375], 00:18:55.361 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 404], 60.00th=[ 416], 00:18:55.361 | 70.00th=[ 429], 80.00th=[ 449], 90.00th=[ 469], 95.00th=[ 490], 00:18:55.361 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 594], 00:18:55.361 | 99.99th=[ 799] 00:18:55.361 bw ( KiB/s): min=34645, max=38016, per=99.88%, avg=36541.74, stdev=990.15, samples=19 00:18:55.361 iops : min= 8661, max= 9504, avg=9135.42, stdev=247.56, samples=19 00:18:55.361 lat (usec) : 500=96.71%, 750=3.27%, 1000=0.01% 00:18:55.361 lat 
(msec) : 2=0.01%, 4=0.01% 00:18:55.361 cpu : usr=84.95%, sys=13.05%, ctx=48, majf=0, minf=0 00:18:55.361 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.361 issued rwts: total=91468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.361 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:55.361 00:18:55.361 Run status group 0 (all jobs): 00:18:55.361 READ: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=357MiB (375MB), run=10001-10001msec 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 ************************************ 00:18:55.361 END TEST fio_dif_1_default 00:18:55.361 ************************************ 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 00:18:55.361 real 0m10.876s 00:18:55.361 user 0m9.046s 00:18:55.361 sys 0m1.550s 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 16:22:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:18:55.361 16:22:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:55.361 16:22:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:55.361 16:22:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 ************************************ 00:18:55.361 START TEST fio_dif_1_multi_subsystems 00:18:55.361 ************************************ 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:55.361 16:22:37 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 bdev_null0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 [2024-07-12 16:22:37.351585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 bdev_null1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:55.361 { 00:18:55.361 "params": { 00:18:55.361 "name": "Nvme$subsystem", 00:18:55.361 "trtype": "$TEST_TRANSPORT", 00:18:55.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.361 "adrfam": "ipv4", 00:18:55.361 "trsvcid": "$NVMF_PORT", 00:18:55.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.361 "hdgst": ${hdgst:-false}, 00:18:55.361 "ddgst": ${ddgst:-false} 00:18:55.361 }, 00:18:55.361 "method": "bdev_nvme_attach_controller" 00:18:55.361 } 00:18:55.361 EOF 00:18:55.361 )") 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:55.361 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:55.362 { 00:18:55.362 "params": { 00:18:55.362 "name": "Nvme$subsystem", 00:18:55.362 "trtype": "$TEST_TRANSPORT", 00:18:55.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.362 "adrfam": "ipv4", 00:18:55.362 "trsvcid": "$NVMF_PORT", 00:18:55.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.362 "hdgst": ${hdgst:-false}, 00:18:55.362 "ddgst": ${ddgst:-false} 00:18:55.362 }, 00:18:55.362 "method": "bdev_nvme_attach_controller" 00:18:55.362 } 00:18:55.362 EOF 00:18:55.362 )") 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:55.362 "params": { 00:18:55.362 "name": "Nvme0", 00:18:55.362 "trtype": "tcp", 00:18:55.362 "traddr": "10.0.0.2", 00:18:55.362 "adrfam": "ipv4", 00:18:55.362 "trsvcid": "4420", 00:18:55.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:55.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:55.362 "hdgst": false, 00:18:55.362 "ddgst": false 00:18:55.362 }, 00:18:55.362 "method": "bdev_nvme_attach_controller" 00:18:55.362 },{ 00:18:55.362 "params": { 00:18:55.362 "name": "Nvme1", 00:18:55.362 "trtype": "tcp", 00:18:55.362 "traddr": "10.0.0.2", 00:18:55.362 "adrfam": "ipv4", 00:18:55.362 "trsvcid": "4420", 00:18:55.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.362 "hdgst": false, 00:18:55.362 "ddgst": false 00:18:55.362 }, 00:18:55.362 "method": "bdev_nvme_attach_controller" 00:18:55.362 }' 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:55.362 16:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:55.362 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:55.362 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:55.362 fio-3.35 00:18:55.362 Starting 2 threads 00:19:05.363 00:19:05.363 filename0: (groupid=0, jobs=1): err= 0: pid=82566: Fri Jul 12 16:22:48 2024 00:19:05.363 read: IOPS=4998, BW=19.5MiB/s (20.5MB/s)(195MiB/10001msec) 00:19:05.363 slat (usec): min=6, max=184, avg=13.25, stdev= 5.14 00:19:05.363 clat (usec): min=449, max=1306, avg=764.80, stdev=68.65 00:19:05.363 lat (usec): min=456, max=1331, avg=778.04, stdev=69.79 00:19:05.363 clat percentiles (usec): 00:19:05.364 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 709], 00:19:05.364 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 783], 00:19:05.364 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 881], 00:19:05.364 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 971], 99.95th=[ 988], 00:19:05.364 | 99.99th=[ 1188] 00:19:05.364 bw ( KiB/s): min=19424, max=20480, per=50.02%, avg=20003.37, stdev=372.10, samples=19 00:19:05.364 iops : min= 4856, max= 5120, 
avg=5000.84, stdev=93.02, samples=19 00:19:05.364 lat (usec) : 500=0.02%, 750=45.06%, 1000=54.89% 00:19:05.364 lat (msec) : 2=0.04% 00:19:05.364 cpu : usr=89.55%, sys=9.03%, ctx=38, majf=0, minf=9 00:19:05.364 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.364 issued rwts: total=49992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.364 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:05.364 filename1: (groupid=0, jobs=1): err= 0: pid=82567: Fri Jul 12 16:22:48 2024 00:19:05.364 read: IOPS=4997, BW=19.5MiB/s (20.5MB/s)(195MiB/10001msec) 00:19:05.364 slat (nsec): min=6405, max=83932, avg=13489.98, stdev=5205.33 00:19:05.364 clat (usec): min=598, max=1720, avg=763.13, stdev=64.06 00:19:05.364 lat (usec): min=617, max=1733, avg=776.62, stdev=64.78 00:19:05.364 clat percentiles (usec): 00:19:05.364 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 709], 00:19:05.364 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 775], 00:19:05.364 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 873], 00:19:05.364 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 971], 99.95th=[ 979], 00:19:05.364 | 99.99th=[ 1369] 00:19:05.364 bw ( KiB/s): min=19424, max=20480, per=50.01%, avg=20000.00, stdev=369.97, samples=19 00:19:05.364 iops : min= 4856, max= 5120, avg=5000.00, stdev=92.49, samples=19 00:19:05.364 lat (usec) : 750=46.94%, 1000=53.03% 00:19:05.364 lat (msec) : 2=0.03% 00:19:05.364 cpu : usr=89.44%, sys=9.13%, ctx=19, majf=0, minf=0 00:19:05.364 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.364 issued rwts: total=49984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.364 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:05.364 00:19:05.364 Run status group 0 (all jobs): 00:19:05.364 READ: bw=39.0MiB/s (40.9MB/s), 19.5MiB/s-19.5MiB/s (20.5MB/s-20.5MB/s), io=391MiB (410MB), run=10001-10001msec 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- 
# set +x 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 ************************************ 00:19:05.364 END TEST fio_dif_1_multi_subsystems 00:19:05.364 ************************************ 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.364 00:19:05.364 real 0m10.990s 00:19:05.364 user 0m18.558s 00:19:05.364 sys 0m2.062s 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 16:22:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:05.364 16:22:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:05.364 16:22:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:05.364 16:22:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 ************************************ 00:19:05.364 START TEST fio_dif_rand_params 00:19:05.364 ************************************ 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 bdev_null0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:05.364 [2024-07-12 16:22:48.393768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:05.364 { 00:19:05.364 "params": { 00:19:05.364 "name": "Nvme$subsystem", 00:19:05.364 "trtype": "$TEST_TRANSPORT", 00:19:05.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.364 "adrfam": "ipv4", 00:19:05.364 "trsvcid": "$NVMF_PORT", 00:19:05.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.364 "hdgst": ${hdgst:-false}, 00:19:05.364 "ddgst": ${ddgst:-false} 00:19:05.364 }, 00:19:05.364 "method": "bdev_nvme_attach_controller" 00:19:05.364 } 00:19:05.364 EOF 00:19:05.364 )") 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.364 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:05.365 "params": { 00:19:05.365 "name": "Nvme0", 00:19:05.365 "trtype": "tcp", 00:19:05.365 "traddr": "10.0.0.2", 00:19:05.365 "adrfam": "ipv4", 00:19:05.365 "trsvcid": "4420", 00:19:05.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:05.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:05.365 "hdgst": false, 00:19:05.365 "ddgst": false 00:19:05.365 }, 00:19:05.365 "method": "bdev_nvme_attach_controller" 00:19:05.365 }' 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:05.365 16:22:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.365 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:05.365 ... 
00:19:05.365 fio-3.35 00:19:05.365 Starting 3 threads 00:19:10.635 00:19:10.635 filename0: (groupid=0, jobs=1): err= 0: pid=82717: Fri Jul 12 16:22:54 2024 00:19:10.635 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5003msec) 00:19:10.635 slat (nsec): min=7151, max=63673, avg=17385.85, stdev=6492.20 00:19:10.635 clat (usec): min=10467, max=12767, avg=11567.09, stdev=495.32 00:19:10.635 lat (usec): min=10480, max=12806, avg=11584.48, stdev=496.05 00:19:10.635 clat percentiles (usec): 00:19:10.635 | 1.00th=[10683], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:19:10.635 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:19:10.635 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:19:10.635 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12780], 99.95th=[12780], 00:19:10.635 | 99.99th=[12780] 00:19:10.635 bw ( KiB/s): min=32256, max=33792, per=33.47%, avg=33194.67, stdev=640.00, samples=9 00:19:10.635 iops : min= 252, max= 264, avg=259.33, stdev= 5.00, samples=9 00:19:10.635 lat (msec) : 20=100.00% 00:19:10.635 cpu : usr=90.02%, sys=9.40%, ctx=12, majf=0, minf=9 00:19:10.635 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.635 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:10.635 filename0: (groupid=0, jobs=1): err= 0: pid=82718: Fri Jul 12 16:22:54 2024 00:19:10.635 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5004msec) 00:19:10.635 slat (nsec): min=7292, max=63490, avg=17062.31, stdev=6391.00 00:19:10.635 clat (usec): min=10474, max=12853, avg=11570.34, stdev=499.19 00:19:10.635 lat (usec): min=10486, max=12898, avg=11587.41, stdev=499.82 00:19:10.635 clat percentiles (usec): 00:19:10.635 | 1.00th=[10683], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:19:10.635 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:19:10.635 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:19:10.635 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12780], 99.95th=[12911], 00:19:10.635 | 99.99th=[12911] 00:19:10.635 bw ( KiB/s): min=32256, max=33792, per=33.47%, avg=33194.67, stdev=640.00, samples=9 00:19:10.635 iops : min= 252, max= 264, avg=259.33, stdev= 5.00, samples=9 00:19:10.635 lat (msec) : 20=100.00% 00:19:10.635 cpu : usr=91.17%, sys=8.20%, ctx=11, majf=0, minf=9 00:19:10.635 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.635 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:10.635 filename0: (groupid=0, jobs=1): err= 0: pid=82719: Fri Jul 12 16:22:54 2024 00:19:10.635 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5006msec) 00:19:10.635 slat (nsec): min=6830, max=60734, avg=16023.04, stdev=6545.06 00:19:10.635 clat (usec): min=10474, max=14950, avg=11577.99, stdev=519.34 00:19:10.635 lat (usec): min=10487, max=14979, avg=11594.02, stdev=520.02 00:19:10.635 clat percentiles (usec): 00:19:10.635 | 1.00th=[10683], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:19:10.635 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 
60.00th=[11731], 00:19:10.635 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:19:10.635 | 99.00th=[12518], 99.50th=[12649], 99.90th=[14877], 99.95th=[15008], 00:19:10.635 | 99.99th=[15008] 00:19:10.635 bw ( KiB/s): min=32256, max=34560, per=33.37%, avg=33102.00, stdev=810.71, samples=9 00:19:10.635 iops : min= 252, max= 270, avg=258.56, stdev= 6.35, samples=9 00:19:10.635 lat (msec) : 20=100.00% 00:19:10.635 cpu : usr=90.83%, sys=8.57%, ctx=13, majf=0, minf=9 00:19:10.635 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.635 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:10.635 00:19:10.635 Run status group 0 (all jobs): 00:19:10.635 READ: bw=96.9MiB/s (102MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=485MiB (508MB), run=5003-5006msec 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:10.635 16:22:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.635 bdev_null0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.635 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 [2024-07-12 16:22:54.233762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 bdev_null1 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 bdev_null2 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.636 { 00:19:10.636 "params": { 00:19:10.636 "name": "Nvme$subsystem", 00:19:10.636 "trtype": "$TEST_TRANSPORT", 00:19:10.636 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:19:10.636 "adrfam": "ipv4", 00:19:10.636 "trsvcid": "$NVMF_PORT", 00:19:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.636 "hdgst": ${hdgst:-false}, 00:19:10.636 "ddgst": ${ddgst:-false} 00:19:10.636 }, 00:19:10.636 "method": "bdev_nvme_attach_controller" 00:19:10.636 } 00:19:10.636 EOF 00:19:10.636 )") 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.636 { 00:19:10.636 "params": { 00:19:10.636 "name": "Nvme$subsystem", 00:19:10.636 "trtype": "$TEST_TRANSPORT", 00:19:10.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.636 "adrfam": "ipv4", 00:19:10.636 "trsvcid": "$NVMF_PORT", 00:19:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.636 "hdgst": ${hdgst:-false}, 00:19:10.636 "ddgst": ${ddgst:-false} 00:19:10.636 }, 00:19:10.636 "method": "bdev_nvme_attach_controller" 00:19:10.636 } 00:19:10.636 EOF 00:19:10.636 )") 00:19:10.636 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:10.636 
16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.637 { 00:19:10.637 "params": { 00:19:10.637 "name": "Nvme$subsystem", 00:19:10.637 "trtype": "$TEST_TRANSPORT", 00:19:10.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.637 "adrfam": "ipv4", 00:19:10.637 "trsvcid": "$NVMF_PORT", 00:19:10.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.637 "hdgst": ${hdgst:-false}, 00:19:10.637 "ddgst": ${ddgst:-false} 00:19:10.637 }, 00:19:10.637 "method": "bdev_nvme_attach_controller" 00:19:10.637 } 00:19:10.637 EOF 00:19:10.637 )") 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:10.637 "params": { 00:19:10.637 "name": "Nvme0", 00:19:10.637 "trtype": "tcp", 00:19:10.637 "traddr": "10.0.0.2", 00:19:10.637 "adrfam": "ipv4", 00:19:10.637 "trsvcid": "4420", 00:19:10.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:10.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:10.637 "hdgst": false, 00:19:10.637 "ddgst": false 00:19:10.637 }, 00:19:10.637 "method": "bdev_nvme_attach_controller" 00:19:10.637 },{ 00:19:10.637 "params": { 00:19:10.637 "name": "Nvme1", 00:19:10.637 "trtype": "tcp", 00:19:10.637 "traddr": "10.0.0.2", 00:19:10.637 "adrfam": "ipv4", 00:19:10.637 "trsvcid": "4420", 00:19:10.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.637 "hdgst": false, 00:19:10.637 "ddgst": false 00:19:10.637 }, 00:19:10.637 "method": "bdev_nvme_attach_controller" 00:19:10.637 },{ 00:19:10.637 "params": { 00:19:10.637 "name": "Nvme2", 00:19:10.637 "trtype": "tcp", 00:19:10.637 "traddr": "10.0.0.2", 00:19:10.637 "adrfam": "ipv4", 00:19:10.637 "trsvcid": "4420", 00:19:10.637 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:10.637 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:10.637 "hdgst": false, 00:19:10.637 "ddgst": false 00:19:10.637 }, 00:19:10.637 "method": "bdev_nvme_attach_controller" 00:19:10.637 }' 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:10.637 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:10.894 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:10.894 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:10.894 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:10.894 16:22:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.894 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:10.894 ... 00:19:10.894 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:10.894 ... 00:19:10.894 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:10.894 ... 00:19:10.894 fio-3.35 00:19:10.894 Starting 24 threads 00:19:23.124 00:19:23.124 filename0: (groupid=0, jobs=1): err= 0: pid=82820: Fri Jul 12 16:23:05 2024 00:19:23.124 read: IOPS=179, BW=718KiB/s (735kB/s)(7184KiB/10006msec) 00:19:23.124 slat (usec): min=3, max=4025, avg=16.63, stdev=94.77 00:19:23.124 clat (msec): min=5, max=192, avg=89.06, stdev=28.98 00:19:23.124 lat (msec): min=5, max=192, avg=89.08, stdev=28.98 00:19:23.124 clat percentiles (msec): 00:19:23.124 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 69], 00:19:23.124 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 96], 00:19:23.124 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 125], 95.00th=[ 142], 00:19:23.124 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 192], 00:19:23.124 | 99.99th=[ 192] 00:19:23.124 bw ( KiB/s): min= 512, max= 1024, per=4.34%, avg=710.37, stdev=125.69, samples=19 00:19:23.124 iops : min= 128, max= 256, avg=177.58, stdev=31.43, samples=19 00:19:23.124 lat (msec) : 10=0.33%, 20=0.89%, 50=8.41%, 100=55.07%, 250=35.30% 00:19:23.124 cpu : usr=34.42%, sys=2.02%, ctx=987, majf=0, minf=9 00:19:23.124 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:23.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 issued rwts: total=1796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.124 filename0: (groupid=0, jobs=1): err= 0: pid=82821: Fri Jul 12 16:23:05 2024 00:19:23.124 read: IOPS=177, BW=711KiB/s (728kB/s)(7152KiB/10065msec) 00:19:23.124 slat (usec): min=5, max=8044, avg=28.01, stdev=328.44 00:19:23.124 clat (msec): min=4, max=202, avg=89.80, stdev=33.82 00:19:23.124 lat (msec): min=4, max=202, avg=89.83, stdev=33.82 00:19:23.124 clat percentiles (msec): 00:19:23.124 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 50], 20.00th=[ 67], 00:19:23.124 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 104], 00:19:23.124 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 130], 95.00th=[ 144], 00:19:23.124 | 99.00th=[ 167], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 203], 00:19:23.124 | 99.99th=[ 203] 00:19:23.124 bw ( KiB/s): min= 480, max= 1520, per=4.33%, avg=708.65, stdev=224.81, samples=20 00:19:23.124 iops : min= 120, max= 380, avg=177.15, stdev=56.20, samples=20 00:19:23.124 lat (msec) : 10=4.36%, 20=1.01%, 50=5.37%, 100=47.04%, 250=42.23% 00:19:23.124 cpu : usr=31.04%, sys=2.06%, ctx=946, majf=0, minf=9 00:19:23.124 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=80.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:23.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 issued rwts: total=1788,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:23.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.124 filename0: (groupid=0, jobs=1): err= 0: pid=82822: Fri Jul 12 16:23:05 2024 00:19:23.124 read: IOPS=163, BW=654KiB/s (670kB/s)(6544KiB/10002msec) 00:19:23.124 slat (usec): min=4, max=8026, avg=33.19, stdev=344.44 00:19:23.124 clat (msec): min=10, max=196, avg=97.64, stdev=29.40 00:19:23.124 lat (msec): min=10, max=196, avg=97.67, stdev=29.41 00:19:23.124 clat percentiles (msec): 00:19:23.124 | 1.00th=[ 33], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 71], 00:19:23.124 | 30.00th=[ 81], 40.00th=[ 89], 50.00th=[ 101], 60.00th=[ 108], 00:19:23.124 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 146], 00:19:23.124 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 197], 99.95th=[ 197], 00:19:23.124 | 99.99th=[ 197] 00:19:23.124 bw ( KiB/s): min= 400, max= 920, per=3.94%, avg=645.47, stdev=135.82, samples=19 00:19:23.124 iops : min= 100, max= 230, avg=161.37, stdev=33.95, samples=19 00:19:23.124 lat (msec) : 20=0.98%, 50=5.13%, 100=43.64%, 250=50.24% 00:19:23.124 cpu : usr=33.98%, sys=4.52%, ctx=1017, majf=0, minf=9 00:19:23.124 IO depths : 1=0.1%, 2=2.8%, 4=11.2%, 8=71.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:19:23.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 complete : 0=0.0%, 4=90.2%, 8=7.3%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 issued rwts: total=1636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.124 filename0: (groupid=0, jobs=1): err= 0: pid=82823: Fri Jul 12 16:23:05 2024 00:19:23.124 read: IOPS=167, BW=669KiB/s (685kB/s)(6704KiB/10021msec) 00:19:23.124 slat (usec): min=3, max=8026, avg=25.75, stdev=293.52 00:19:23.124 clat (msec): min=35, max=193, avg=95.48, stdev=29.08 00:19:23.124 lat (msec): min=35, max=193, avg=95.51, stdev=29.08 00:19:23.124 clat percentiles (msec): 00:19:23.124 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:19:23.124 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 96], 60.00th=[ 108], 00:19:23.124 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 134], 95.00th=[ 144], 00:19:23.124 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 194], 99.95th=[ 194], 00:19:23.124 | 99.99th=[ 194] 00:19:23.124 bw ( KiB/s): min= 400, max= 904, per=4.06%, avg=664.00, stdev=138.81, samples=20 00:19:23.124 iops : min= 100, max= 226, avg=166.00, stdev=34.70, samples=20 00:19:23.124 lat (msec) : 50=5.79%, 100=48.15%, 250=46.06% 00:19:23.124 cpu : usr=33.19%, sys=2.26%, ctx=923, majf=0, minf=9 00:19:23.124 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:23.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 issued rwts: total=1676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.124 filename0: (groupid=0, jobs=1): err= 0: pid=82824: Fri Jul 12 16:23:05 2024 00:19:23.124 read: IOPS=168, BW=676KiB/s (692kB/s)(6760KiB/10003msec) 00:19:23.124 slat (usec): min=4, max=8025, avg=25.35, stdev=250.74 00:19:23.124 clat (msec): min=2, max=192, avg=94.57, stdev=30.92 00:19:23.124 lat (msec): min=2, max=192, avg=94.60, stdev=30.92 00:19:23.124 clat percentiles (msec): 00:19:23.124 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 71], 00:19:23.124 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 108], 00:19:23.124 | 70.00th=[ 112], 80.00th=[ 120], 
90.00th=[ 131], 95.00th=[ 144], 00:19:23.124 | 99.00th=[ 161], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 192], 00:19:23.124 | 99.99th=[ 192] 00:19:23.124 bw ( KiB/s): min= 496, max= 1024, per=4.02%, avg=657.68, stdev=145.19, samples=19 00:19:23.124 iops : min= 124, max= 256, avg=164.42, stdev=36.30, samples=19 00:19:23.124 lat (msec) : 4=0.59%, 10=0.71%, 20=0.77%, 50=5.56%, 100=45.50% 00:19:23.124 lat (msec) : 250=46.86% 00:19:23.124 cpu : usr=32.41%, sys=1.74%, ctx=1287, majf=0, minf=9 00:19:23.124 IO depths : 1=0.1%, 2=2.2%, 4=8.9%, 8=74.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:19:23.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 complete : 0=0.0%, 4=89.4%, 8=8.7%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 issued rwts: total=1690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.124 filename0: (groupid=0, jobs=1): err= 0: pid=82825: Fri Jul 12 16:23:05 2024 00:19:23.124 read: IOPS=171, BW=687KiB/s (703kB/s)(6884KiB/10027msec) 00:19:23.124 slat (usec): min=4, max=8026, avg=26.00, stdev=289.60 00:19:23.124 clat (msec): min=37, max=193, avg=93.07, stdev=30.94 00:19:23.124 lat (msec): min=37, max=193, avg=93.10, stdev=30.94 00:19:23.124 clat percentiles (msec): 00:19:23.124 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 68], 00:19:23.124 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 105], 00:19:23.124 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 148], 00:19:23.124 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 194], 99.95th=[ 194], 00:19:23.124 | 99.99th=[ 194] 00:19:23.124 bw ( KiB/s): min= 400, max= 1024, per=4.16%, avg=682.00, stdev=158.09, samples=20 00:19:23.124 iops : min= 100, max= 256, avg=170.50, stdev=39.52, samples=20 00:19:23.124 lat (msec) : 50=6.33%, 100=51.95%, 250=41.72% 00:19:23.124 cpu : usr=35.35%, sys=2.14%, ctx=1202, majf=0, minf=9 00:19:23.124 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:23.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.124 issued rwts: total=1721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.124 filename0: (groupid=0, jobs=1): err= 0: pid=82826: Fri Jul 12 16:23:05 2024 00:19:23.124 read: IOPS=160, BW=641KiB/s (656kB/s)(6444KiB/10053msec) 00:19:23.124 slat (usec): min=6, max=9029, avg=35.51, stdev=411.19 00:19:23.124 clat (msec): min=16, max=204, avg=99.44, stdev=31.17 00:19:23.124 lat (msec): min=16, max=204, avg=99.48, stdev=31.17 00:19:23.124 clat percentiles (msec): 00:19:23.124 | 1.00th=[ 22], 5.00th=[ 50], 10.00th=[ 62], 20.00th=[ 73], 00:19:23.124 | 30.00th=[ 81], 40.00th=[ 95], 50.00th=[ 105], 60.00th=[ 108], 00:19:23.124 | 70.00th=[ 112], 80.00th=[ 124], 90.00th=[ 142], 95.00th=[ 148], 00:19:23.124 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 199], 99.95th=[ 205], 00:19:23.124 | 99.99th=[ 205] 00:19:23.124 bw ( KiB/s): min= 400, max= 1005, per=3.91%, avg=640.65, stdev=151.56, samples=20 00:19:23.124 iops : min= 100, max= 251, avg=160.15, stdev=37.86, samples=20 00:19:23.124 lat (msec) : 20=0.87%, 50=4.59%, 100=42.27%, 250=52.27% 00:19:23.124 cpu : usr=33.35%, sys=2.29%, ctx=2374, majf=0, minf=9 00:19:23.124 IO depths : 1=0.1%, 2=2.5%, 4=9.8%, 8=72.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:23.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 
complete : 0=0.0%, 4=90.4%, 8=7.4%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.125 filename0: (groupid=0, jobs=1): err= 0: pid=82827: Fri Jul 12 16:23:05 2024 00:19:23.125 read: IOPS=156, BW=626KiB/s (641kB/s)(6280KiB/10032msec) 00:19:23.125 slat (usec): min=3, max=12036, avg=31.44, stdev=416.88 00:19:23.125 clat (msec): min=38, max=192, avg=102.07, stdev=28.96 00:19:23.125 lat (msec): min=38, max=192, avg=102.10, stdev=28.96 00:19:23.125 clat percentiles (msec): 00:19:23.125 | 1.00th=[ 47], 5.00th=[ 51], 10.00th=[ 66], 20.00th=[ 72], 00:19:23.125 | 30.00th=[ 83], 40.00th=[ 97], 50.00th=[ 108], 60.00th=[ 110], 00:19:23.125 | 70.00th=[ 116], 80.00th=[ 122], 90.00th=[ 144], 95.00th=[ 150], 00:19:23.125 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 192], 00:19:23.125 | 99.99th=[ 192] 00:19:23.125 bw ( KiB/s): min= 400, max= 840, per=3.80%, avg=621.65, stdev=125.11, samples=20 00:19:23.125 iops : min= 100, max= 210, avg=155.40, stdev=31.26, samples=20 00:19:23.125 lat (msec) : 50=4.14%, 100=39.36%, 250=56.50% 00:19:23.125 cpu : usr=31.62%, sys=1.82%, ctx=1435, majf=0, minf=9 00:19:23.125 IO depths : 1=0.1%, 2=3.4%, 4=13.4%, 8=68.8%, 16=14.3%, 32=0.0%, >=64=0.0% 00:19:23.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 complete : 0=0.0%, 4=91.0%, 8=6.0%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 issued rwts: total=1570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.125 filename1: (groupid=0, jobs=1): err= 0: pid=82828: Fri Jul 12 16:23:05 2024 00:19:23.125 read: IOPS=166, BW=665KiB/s (681kB/s)(6660KiB/10011msec) 00:19:23.125 slat (nsec): min=5509, max=45291, avg=13798.31, stdev=4750.85 00:19:23.125 clat (msec): min=12, max=191, avg=96.11, stdev=30.81 00:19:23.125 lat (msec): min=12, max=191, avg=96.13, stdev=30.81 00:19:23.125 clat percentiles (msec): 00:19:23.125 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 71], 00:19:23.125 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 108], 00:19:23.125 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 134], 95.00th=[ 144], 00:19:23.125 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:19:23.125 | 99.99th=[ 192] 00:19:23.125 bw ( KiB/s): min= 384, max= 984, per=4.01%, avg=656.42, stdev=149.75, samples=19 00:19:23.125 iops : min= 96, max= 246, avg=164.11, stdev=37.44, samples=19 00:19:23.125 lat (msec) : 20=0.78%, 50=6.01%, 100=46.25%, 250=46.97% 00:19:23.125 cpu : usr=30.06%, sys=1.90%, ctx=956, majf=0, minf=9 00:19:23.125 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.2%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:23.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 issued rwts: total=1665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.125 filename1: (groupid=0, jobs=1): err= 0: pid=82829: Fri Jul 12 16:23:05 2024 00:19:23.125 read: IOPS=170, BW=683KiB/s (699kB/s)(6864KiB/10052msec) 00:19:23.125 slat (usec): min=4, max=9030, avg=19.38, stdev=217.70 00:19:23.125 clat (msec): min=11, max=239, avg=93.45, stdev=32.31 00:19:23.125 lat (msec): min=11, max=239, avg=93.47, stdev=32.30 00:19:23.125 clat percentiles (msec): 00:19:23.125 | 1.00th=[ 28], 5.00th=[ 47], 10.00th=[ 59], 
20.00th=[ 69], 00:19:23.125 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 105], 00:19:23.125 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 136], 95.00th=[ 161], 00:19:23.125 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 197], 99.95th=[ 241], 00:19:23.125 | 99.99th=[ 241] 00:19:23.125 bw ( KiB/s): min= 384, max= 920, per=4.17%, avg=682.70, stdev=134.31, samples=20 00:19:23.125 iops : min= 96, max= 230, avg=170.65, stdev=33.54, samples=20 00:19:23.125 lat (msec) : 20=0.93%, 50=6.64%, 100=51.34%, 250=41.08% 00:19:23.125 cpu : usr=32.65%, sys=2.04%, ctx=1026, majf=0, minf=9 00:19:23.125 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:23.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.125 filename1: (groupid=0, jobs=1): err= 0: pid=82830: Fri Jul 12 16:23:05 2024 00:19:23.125 read: IOPS=172, BW=692KiB/s (708kB/s)(6940KiB/10033msec) 00:19:23.125 slat (usec): min=4, max=8024, avg=24.83, stdev=273.18 00:19:23.125 clat (msec): min=41, max=210, avg=92.38, stdev=28.72 00:19:23.125 lat (msec): min=41, max=210, avg=92.41, stdev=28.71 00:19:23.125 clat percentiles (msec): 00:19:23.125 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 59], 20.00th=[ 69], 00:19:23.125 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 103], 00:19:23.125 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 129], 95.00th=[ 144], 00:19:23.125 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 211], 00:19:23.125 | 99.99th=[ 211] 00:19:23.125 bw ( KiB/s): min= 480, max= 976, per=4.20%, avg=687.60, stdev=135.73, samples=20 00:19:23.125 iops : min= 120, max= 244, avg=171.90, stdev=33.93, samples=20 00:19:23.125 lat (msec) : 50=6.63%, 100=52.45%, 250=40.92% 00:19:23.125 cpu : usr=33.91%, sys=2.23%, ctx=1085, majf=0, minf=9 00:19:23.125 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:23.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 issued rwts: total=1735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.125 filename1: (groupid=0, jobs=1): err= 0: pid=82831: Fri Jul 12 16:23:05 2024 00:19:23.125 read: IOPS=171, BW=687KiB/s (703kB/s)(6912KiB/10064msec) 00:19:23.125 slat (usec): min=4, max=4032, avg=15.76, stdev=96.79 00:19:23.125 clat (msec): min=3, max=199, avg=92.94, stdev=35.73 00:19:23.125 lat (msec): min=3, max=199, avg=92.95, stdev=35.73 00:19:23.125 clat percentiles (msec): 00:19:23.125 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 49], 20.00th=[ 69], 00:19:23.125 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 96], 60.00th=[ 107], 00:19:23.125 | 70.00th=[ 110], 80.00th=[ 120], 90.00th=[ 134], 95.00th=[ 153], 00:19:23.125 | 99.00th=[ 174], 99.50th=[ 192], 99.90th=[ 201], 99.95th=[ 201], 00:19:23.125 | 99.99th=[ 201] 00:19:23.125 bw ( KiB/s): min= 400, max= 1408, per=4.18%, avg=684.65, stdev=207.81, samples=20 00:19:23.125 iops : min= 100, max= 352, avg=171.15, stdev=51.95, samples=20 00:19:23.125 lat (msec) : 4=0.64%, 10=3.88%, 20=1.04%, 50=4.92%, 100=42.42% 00:19:23.125 lat (msec) : 250=47.11% 00:19:23.125 cpu : usr=33.70%, sys=2.15%, ctx=1516, majf=0, minf=9 00:19:23.125 IO depths : 1=0.2%, 2=1.7%, 4=6.4%, 
8=76.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:23.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 complete : 0=0.0%, 4=89.3%, 8=9.3%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.125 filename1: (groupid=0, jobs=1): err= 0: pid=82832: Fri Jul 12 16:23:05 2024 00:19:23.125 read: IOPS=196, BW=785KiB/s (804kB/s)(7852KiB/10001msec) 00:19:23.125 slat (usec): min=4, max=630, avg=13.92, stdev=15.10 00:19:23.125 clat (usec): min=1308, max=192394, avg=81394.79, stdev=39441.47 00:19:23.125 lat (usec): min=1315, max=192409, avg=81408.72, stdev=39441.71 00:19:23.125 clat percentiles (usec): 00:19:23.125 | 1.00th=[ 1418], 5.00th=[ 1532], 10.00th=[ 5276], 20.00th=[ 55837], 00:19:23.125 | 30.00th=[ 68682], 40.00th=[ 71828], 50.00th=[ 80217], 60.00th=[ 89654], 00:19:23.125 | 70.00th=[106431], 80.00th=[112722], 90.00th=[129500], 95.00th=[143655], 00:19:23.125 | 99.00th=[160433], 99.50th=[170918], 99.90th=[191890], 99.95th=[191890], 00:19:23.125 | 99.99th=[191890] 00:19:23.125 bw ( KiB/s): min= 400, max= 1063, per=4.27%, avg=699.42, stdev=158.30, samples=19 00:19:23.125 iops : min= 100, max= 265, avg=174.79, stdev=39.48, samples=19 00:19:23.125 lat (msec) : 2=8.30%, 4=1.27%, 10=0.66%, 20=0.66%, 50=7.59% 00:19:23.125 lat (msec) : 100=47.17%, 250=34.34% 00:19:23.125 cpu : usr=34.41%, sys=2.32%, ctx=2762, majf=0, minf=9 00:19:23.125 IO depths : 1=0.4%, 2=1.1%, 4=3.1%, 8=80.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:23.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 issued rwts: total=1963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.125 filename1: (groupid=0, jobs=1): err= 0: pid=82833: Fri Jul 12 16:23:05 2024 00:19:23.125 read: IOPS=167, BW=670KiB/s (686kB/s)(6716KiB/10026msec) 00:19:23.125 slat (usec): min=3, max=8026, avg=26.07, stdev=293.18 00:19:23.125 clat (msec): min=36, max=203, avg=95.38, stdev=30.30 00:19:23.125 lat (msec): min=36, max=203, avg=95.40, stdev=30.30 00:19:23.125 clat percentiles (msec): 00:19:23.125 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 72], 00:19:23.125 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 93], 60.00th=[ 106], 00:19:23.125 | 70.00th=[ 110], 80.00th=[ 118], 90.00th=[ 140], 95.00th=[ 153], 00:19:23.125 | 99.00th=[ 174], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 205], 00:19:23.125 | 99.99th=[ 205] 00:19:23.125 bw ( KiB/s): min= 384, max= 968, per=4.07%, avg=665.20, stdev=143.14, samples=20 00:19:23.125 iops : min= 96, max= 242, avg=166.30, stdev=35.78, samples=20 00:19:23.125 lat (msec) : 50=5.36%, 100=51.64%, 250=43.00% 00:19:23.125 cpu : usr=31.42%, sys=2.09%, ctx=954, majf=0, minf=9 00:19:23.125 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:23.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.125 issued rwts: total=1679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.125 filename1: (groupid=0, jobs=1): err= 0: pid=82834: Fri Jul 12 16:23:05 2024 00:19:23.125 read: IOPS=171, BW=686KiB/s (702kB/s)(6868KiB/10012msec) 00:19:23.125 slat (usec): min=4, max=8028, avg=30.41, 
stdev=348.31 00:19:23.125 clat (msec): min=10, max=192, avg=93.14, stdev=30.01 00:19:23.125 lat (msec): min=10, max=192, avg=93.17, stdev=30.00 00:19:23.125 clat percentiles (msec): 00:19:23.125 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 68], 00:19:23.125 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 107], 00:19:23.125 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 142], 00:19:23.125 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:19:23.125 | 99.99th=[ 192] 00:19:23.125 bw ( KiB/s): min= 512, max= 920, per=4.14%, avg=677.53, stdev=139.99, samples=19 00:19:23.125 iops : min= 128, max= 230, avg=169.37, stdev=35.01, samples=19 00:19:23.126 lat (msec) : 20=0.93%, 50=6.17%, 100=49.16%, 250=43.74% 00:19:23.126 cpu : usr=35.69%, sys=2.07%, ctx=983, majf=0, minf=9 00:19:23.126 IO depths : 1=0.1%, 2=1.3%, 4=5.5%, 8=78.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 issued rwts: total=1717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.126 filename1: (groupid=0, jobs=1): err= 0: pid=82835: Fri Jul 12 16:23:05 2024 00:19:23.126 read: IOPS=169, BW=680KiB/s (696kB/s)(6800KiB/10006msec) 00:19:23.126 slat (usec): min=5, max=590, avg=14.47, stdev=14.74 00:19:23.126 clat (msec): min=8, max=192, avg=94.08, stdev=32.91 00:19:23.126 lat (msec): min=8, max=192, avg=94.10, stdev=32.91 00:19:23.126 clat percentiles (msec): 00:19:23.126 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 70], 00:19:23.126 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 90], 60.00th=[ 108], 00:19:23.126 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 161], 00:19:23.126 | 99.00th=[ 171], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 192], 00:19:23.126 | 99.99th=[ 192] 00:19:23.126 bw ( KiB/s): min= 384, max= 1072, per=4.10%, avg=671.16, stdev=175.51, samples=19 00:19:23.126 iops : min= 96, max= 268, avg=167.79, stdev=43.88, samples=19 00:19:23.126 lat (msec) : 10=0.65%, 20=0.53%, 50=8.00%, 100=47.41%, 250=43.41% 00:19:23.126 cpu : usr=30.74%, sys=1.94%, ctx=911, majf=0, minf=9 00:19:23.126 IO depths : 1=0.1%, 2=1.2%, 4=5.1%, 8=78.6%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 complete : 0=0.0%, 4=88.1%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 issued rwts: total=1700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.126 filename2: (groupid=0, jobs=1): err= 0: pid=82836: Fri Jul 12 16:23:05 2024 00:19:23.126 read: IOPS=178, BW=714KiB/s (731kB/s)(7192KiB/10068msec) 00:19:23.126 slat (usec): min=3, max=8045, avg=33.11, stdev=353.97 00:19:23.126 clat (msec): min=6, max=202, avg=89.27, stdev=32.42 00:19:23.126 lat (msec): min=6, max=210, avg=89.30, stdev=32.43 00:19:23.126 clat percentiles (msec): 00:19:23.126 | 1.00th=[ 7], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 67], 00:19:23.126 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 102], 00:19:23.126 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 129], 95.00th=[ 144], 00:19:23.126 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 192], 99.95th=[ 203], 00:19:23.126 | 99.99th=[ 203] 00:19:23.126 bw ( KiB/s): min= 504, max= 1424, per=4.35%, avg=712.85, stdev=208.27, samples=20 00:19:23.126 iops : min= 126, max= 356, avg=178.20, 
stdev=52.07, samples=20 00:19:23.126 lat (msec) : 10=3.45%, 20=1.00%, 50=7.06%, 100=47.78%, 250=40.71% 00:19:23.126 cpu : usr=35.73%, sys=2.66%, ctx=945, majf=0, minf=9 00:19:23.126 IO depths : 1=0.2%, 2=0.6%, 4=1.7%, 8=81.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 issued rwts: total=1798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.126 filename2: (groupid=0, jobs=1): err= 0: pid=82837: Fri Jul 12 16:23:05 2024 00:19:23.126 read: IOPS=176, BW=707KiB/s (724kB/s)(7100KiB/10038msec) 00:19:23.126 slat (usec): min=5, max=7037, avg=18.30, stdev=166.89 00:19:23.126 clat (msec): min=33, max=203, avg=90.40, stdev=27.70 00:19:23.126 lat (msec): min=34, max=203, avg=90.42, stdev=27.71 00:19:23.126 clat percentiles (msec): 00:19:23.126 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 68], 00:19:23.126 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 99], 00:19:23.126 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 126], 95.00th=[ 142], 00:19:23.126 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 205], 00:19:23.126 | 99.99th=[ 205] 00:19:23.126 bw ( KiB/s): min= 512, max= 1024, per=4.30%, avg=703.60, stdev=135.23, samples=20 00:19:23.126 iops : min= 128, max= 256, avg=175.90, stdev=33.81, samples=20 00:19:23.126 lat (msec) : 50=6.93%, 100=54.82%, 250=38.25% 00:19:23.126 cpu : usr=34.50%, sys=2.05%, ctx=2416, majf=0, minf=9 00:19:23.126 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 issued rwts: total=1775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.126 filename2: (groupid=0, jobs=1): err= 0: pid=82838: Fri Jul 12 16:23:05 2024 00:19:23.126 read: IOPS=170, BW=681KiB/s (698kB/s)(6860KiB/10071msec) 00:19:23.126 slat (usec): min=3, max=8027, avg=21.86, stdev=229.02 00:19:23.126 clat (msec): min=7, max=215, avg=93.68, stdev=33.79 00:19:23.126 lat (msec): min=7, max=215, avg=93.70, stdev=33.79 00:19:23.126 clat percentiles (msec): 00:19:23.126 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 53], 20.00th=[ 68], 00:19:23.126 | 30.00th=[ 73], 40.00th=[ 83], 50.00th=[ 96], 60.00th=[ 108], 00:19:23.126 | 70.00th=[ 113], 80.00th=[ 121], 90.00th=[ 140], 95.00th=[ 146], 00:19:23.126 | 99.00th=[ 165], 99.50th=[ 184], 99.90th=[ 203], 99.95th=[ 215], 00:19:23.126 | 99.99th=[ 215] 00:19:23.126 bw ( KiB/s): min= 480, max= 1269, per=4.14%, avg=678.75, stdev=184.86, samples=20 00:19:23.126 iops : min= 120, max= 317, avg=169.65, stdev=46.18, samples=20 00:19:23.126 lat (msec) : 10=1.40%, 20=1.40%, 50=6.65%, 100=45.89%, 250=44.66% 00:19:23.126 cpu : usr=36.09%, sys=2.30%, ctx=1392, majf=0, minf=9 00:19:23.126 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=76.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 complete : 0=0.0%, 4=89.3%, 8=9.3%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.126 filename2: (groupid=0, jobs=1): err= 0: pid=82839: Fri Jul 12 16:23:05 2024 
00:19:23.126 read: IOPS=160, BW=644KiB/s (659kB/s)(6476KiB/10059msec) 00:19:23.126 slat (usec): min=7, max=8040, avg=41.11, stdev=429.38 00:19:23.126 clat (msec): min=11, max=199, avg=99.03, stdev=34.05 00:19:23.126 lat (msec): min=11, max=199, avg=99.07, stdev=34.04 00:19:23.126 clat percentiles (msec): 00:19:23.126 | 1.00th=[ 14], 5.00th=[ 46], 10.00th=[ 59], 20.00th=[ 70], 00:19:23.126 | 30.00th=[ 80], 40.00th=[ 91], 50.00th=[ 106], 60.00th=[ 110], 00:19:23.126 | 70.00th=[ 114], 80.00th=[ 125], 90.00th=[ 142], 95.00th=[ 155], 00:19:23.126 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 201], 00:19:23.126 | 99.99th=[ 201] 00:19:23.126 bw ( KiB/s): min= 400, max= 1017, per=3.91%, avg=640.85, stdev=158.15, samples=20 00:19:23.126 iops : min= 100, max= 254, avg=160.20, stdev=39.51, samples=20 00:19:23.126 lat (msec) : 20=2.96%, 50=4.45%, 100=36.94%, 250=55.65% 00:19:23.126 cpu : usr=34.06%, sys=2.05%, ctx=1192, majf=0, minf=9 00:19:23.126 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=73.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 complete : 0=0.0%, 4=90.2%, 8=7.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 issued rwts: total=1619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.126 filename2: (groupid=0, jobs=1): err= 0: pid=82840: Fri Jul 12 16:23:05 2024 00:19:23.126 read: IOPS=167, BW=671KiB/s (687kB/s)(6712KiB/10010msec) 00:19:23.126 slat (usec): min=4, max=12031, avg=30.71, stdev=392.50 00:19:23.126 clat (msec): min=14, max=188, avg=95.31, stdev=30.05 00:19:23.126 lat (msec): min=14, max=188, avg=95.34, stdev=30.06 00:19:23.126 clat percentiles (msec): 00:19:23.126 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 71], 00:19:23.126 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 95], 60.00th=[ 108], 00:19:23.126 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 148], 00:19:23.126 | 99.00th=[ 176], 99.50th=[ 176], 99.90th=[ 188], 99.95th=[ 188], 00:19:23.126 | 99.99th=[ 188] 00:19:23.126 bw ( KiB/s): min= 384, max= 990, per=4.08%, avg=667.10, stdev=142.97, samples=20 00:19:23.126 iops : min= 96, max= 247, avg=166.75, stdev=35.68, samples=20 00:19:23.126 lat (msec) : 20=0.42%, 50=5.42%, 100=47.62%, 250=46.54% 00:19:23.126 cpu : usr=34.29%, sys=2.14%, ctx=2468, majf=0, minf=9 00:19:23.126 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 issued rwts: total=1678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.126 filename2: (groupid=0, jobs=1): err= 0: pid=82841: Fri Jul 12 16:23:05 2024 00:19:23.126 read: IOPS=179, BW=716KiB/s (733kB/s)(7172KiB/10013msec) 00:19:23.126 slat (usec): min=4, max=4023, avg=18.72, stdev=98.17 00:19:23.126 clat (msec): min=15, max=192, avg=89.26, stdev=27.82 00:19:23.126 lat (msec): min=15, max=192, avg=89.28, stdev=27.83 00:19:23.126 clat percentiles (msec): 00:19:23.126 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 67], 00:19:23.126 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 96], 00:19:23.126 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 126], 95.00th=[ 138], 00:19:23.126 | 99.00th=[ 157], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 192], 00:19:23.126 | 99.99th=[ 192] 00:19:23.126 bw ( KiB/s): 
min= 560, max= 1021, per=4.35%, avg=712.30, stdev=117.20, samples=20 00:19:23.126 iops : min= 140, max= 255, avg=178.05, stdev=29.28, samples=20 00:19:23.126 lat (msec) : 20=0.67%, 50=7.25%, 100=55.94%, 250=36.14% 00:19:23.126 cpu : usr=44.73%, sys=2.98%, ctx=7177, majf=0, minf=9 00:19:23.126 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.126 issued rwts: total=1793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.126 filename2: (groupid=0, jobs=1): err= 0: pid=82842: Fri Jul 12 16:23:05 2024 00:19:23.126 read: IOPS=167, BW=671KiB/s (687kB/s)(6720KiB/10011msec) 00:19:23.126 slat (usec): min=5, max=5035, avg=17.48, stdev=122.94 00:19:23.126 clat (msec): min=36, max=189, avg=95.24, stdev=30.00 00:19:23.126 lat (msec): min=36, max=189, avg=95.26, stdev=30.00 00:19:23.126 clat percentiles (msec): 00:19:23.126 | 1.00th=[ 45], 5.00th=[ 50], 10.00th=[ 62], 20.00th=[ 70], 00:19:23.126 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 95], 60.00th=[ 106], 00:19:23.126 | 70.00th=[ 110], 80.00th=[ 120], 90.00th=[ 144], 95.00th=[ 148], 00:19:23.126 | 99.00th=[ 165], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 190], 00:19:23.126 | 99.99th=[ 190] 00:19:23.126 bw ( KiB/s): min= 400, max= 930, per=4.08%, avg=668.89, stdev=133.57, samples=19 00:19:23.126 iops : min= 100, max= 232, avg=167.16, stdev=33.30, samples=19 00:19:23.126 lat (msec) : 50=5.65%, 100=52.44%, 250=41.90% 00:19:23.126 cpu : usr=29.83%, sys=1.95%, ctx=891, majf=0, minf=9 00:19:23.126 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:23.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.127 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.127 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.127 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.127 filename2: (groupid=0, jobs=1): err= 0: pid=82843: Fri Jul 12 16:23:05 2024 00:19:23.127 read: IOPS=172, BW=691KiB/s (708kB/s)(6952KiB/10056msec) 00:19:23.127 slat (usec): min=3, max=16023, avg=37.24, stdev=449.20 00:19:23.127 clat (msec): min=9, max=216, avg=92.23, stdev=32.13 00:19:23.127 lat (msec): min=9, max=216, avg=92.26, stdev=32.14 00:19:23.127 clat percentiles (msec): 00:19:23.127 | 1.00th=[ 10], 5.00th=[ 44], 10.00th=[ 54], 20.00th=[ 68], 00:19:23.127 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 92], 60.00th=[ 107], 00:19:23.127 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 130], 95.00th=[ 144], 00:19:23.127 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 218], 00:19:23.127 | 99.99th=[ 218] 00:19:23.127 bw ( KiB/s): min= 488, max= 1142, per=4.22%, avg=691.10, stdev=153.38, samples=20 00:19:23.127 iops : min= 122, max= 285, avg=172.75, stdev=38.27, samples=20 00:19:23.127 lat (msec) : 10=1.04%, 20=1.61%, 50=5.75%, 100=47.58%, 250=44.02% 00:19:23.127 cpu : usr=29.59%, sys=5.98%, ctx=893, majf=0, minf=9 00:19:23.127 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=78.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:23.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.127 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.127 issued rwts: total=1738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.127 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:19:23.127 00:19:23.127 Run status group 0 (all jobs): 00:19:23.127 READ: bw=16.0MiB/s (16.8MB/s), 626KiB/s-785KiB/s (641kB/s-804kB/s), io=161MiB (169MB), run=10001-10071msec 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 bdev_null0 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 [2024-07-12 16:23:05.455170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- 
# rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 bdev_null1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.127 { 00:19:23.127 "params": { 00:19:23.127 "name": "Nvme$subsystem", 00:19:23.127 "trtype": "$TEST_TRANSPORT", 00:19:23.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.127 "adrfam": "ipv4", 00:19:23.127 "trsvcid": "$NVMF_PORT", 00:19:23.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.127 "hdgst": ${hdgst:-false}, 00:19:23.127 "ddgst": ${ddgst:-false} 00:19:23.127 }, 00:19:23.127 "method": "bdev_nvme_attach_controller" 00:19:23.127 } 00:19:23.127 EOF 00:19:23.127 )") 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # 
local file 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.127 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.128 { 00:19:23.128 "params": { 00:19:23.128 "name": "Nvme$subsystem", 00:19:23.128 "trtype": "$TEST_TRANSPORT", 00:19:23.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.128 "adrfam": "ipv4", 00:19:23.128 "trsvcid": "$NVMF_PORT", 00:19:23.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.128 "hdgst": ${hdgst:-false}, 00:19:23.128 "ddgst": ${ddgst:-false} 00:19:23.128 }, 00:19:23.128 "method": "bdev_nvme_attach_controller" 00:19:23.128 } 00:19:23.128 EOF 00:19:23.128 )") 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
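At this point target/dif.sh has asked gen_nvmf_target_json for a config covering subsystems 0 and 1: nvmf/common.sh builds one heredoc fragment per subsystem, sets IFS to a comma, printf-joins the fragments, and runs the result through jq before handing it to fio via --spdk_json_conf (the expanded output is printed in the next lines of the log). Below is a minimal standalone sketch of that assembly idea, assuming two subsystems and the 10.0.0.2:4420 listener shown elsewhere in this log; the surrounding "subsystems"/"bdev"/"config" wrapper, the output path, and the variable names are illustrative assumptions, not the literal contents of nvmf/common.sh.

#!/usr/bin/env bash
# Sketch only: rebuild the kind of bdev_nvme_attach_controller JSON shown in
# this log for fio's --spdk_json_conf option. Addresses and NQNs mirror the
# values printed above; the wrapper object and file path are assumptions.
traddr=10.0.0.2 trsvcid=4420
entries=()
for sub in 0 1; do
  entries+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme${sub}",
    "trtype": "tcp",
    "traddr": "${traddr}",
    "adrfam": "ipv4",
    "trsvcid": "${trsvcid}",
    "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
    "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the per-controller entries with commas (IFS changed only inside the
# subshell), wrap them in a bdev config section, and pretty-print with jq.
joined=$(IFS=,; printf '%s' "${entries[*]}")
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "$joined" \
  | jq . > /tmp/nvme_bdev.json

fio can then be pointed at such a file with --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_bdev.json; the test itself avoids the temporary file by passing the config through process substitution (/dev/fd/62), as the fio command lines in this log show.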
00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:23.128 "params": { 00:19:23.128 "name": "Nvme0", 00:19:23.128 "trtype": "tcp", 00:19:23.128 "traddr": "10.0.0.2", 00:19:23.128 "adrfam": "ipv4", 00:19:23.128 "trsvcid": "4420", 00:19:23.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:23.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:23.128 "hdgst": false, 00:19:23.128 "ddgst": false 00:19:23.128 }, 00:19:23.128 "method": "bdev_nvme_attach_controller" 00:19:23.128 },{ 00:19:23.128 "params": { 00:19:23.128 "name": "Nvme1", 00:19:23.128 "trtype": "tcp", 00:19:23.128 "traddr": "10.0.0.2", 00:19:23.128 "adrfam": "ipv4", 00:19:23.128 "trsvcid": "4420", 00:19:23.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.128 "hdgst": false, 00:19:23.128 "ddgst": false 00:19:23.128 }, 00:19:23.128 "method": "bdev_nvme_attach_controller" 00:19:23.128 }' 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:23.128 16:23:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.128 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:23.128 ... 00:19:23.128 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:23.128 ... 
00:19:23.128 fio-3.35 00:19:23.128 Starting 4 threads 00:19:28.390 00:19:28.390 filename0: (groupid=0, jobs=1): err= 0: pid=82976: Fri Jul 12 16:23:11 2024 00:19:28.390 read: IOPS=2036, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5002msec) 00:19:28.390 slat (nsec): min=7382, max=61224, avg=15884.80, stdev=4584.91 00:19:28.390 clat (usec): min=1202, max=6541, avg=3876.98, stdev=683.01 00:19:28.390 lat (usec): min=1212, max=6568, avg=3892.86, stdev=682.78 00:19:28.390 clat percentiles (usec): 00:19:28.390 | 1.00th=[ 1926], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3261], 00:19:28.390 | 30.00th=[ 3458], 40.00th=[ 3884], 50.00th=[ 4015], 60.00th=[ 4080], 00:19:28.390 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 4948], 00:19:28.390 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 5735], 99.95th=[ 5735], 00:19:28.390 | 99.99th=[ 6390] 00:19:28.390 bw ( KiB/s): min=15104, max=17408, per=24.24%, avg=16289.50, stdev=850.71, samples=10 00:19:28.390 iops : min= 1888, max= 2176, avg=2036.10, stdev=106.43, samples=10 00:19:28.390 lat (msec) : 2=1.66%, 4=46.88%, 10=51.46% 00:19:28.390 cpu : usr=91.00%, sys=8.12%, ctx=47, majf=0, minf=10 00:19:28.390 IO depths : 1=0.1%, 2=11.8%, 4=60.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.390 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.390 issued rwts: total=10187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.390 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:28.390 filename0: (groupid=0, jobs=1): err= 0: pid=82977: Fri Jul 12 16:23:11 2024 00:19:28.390 read: IOPS=2035, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5001msec) 00:19:28.390 slat (nsec): min=7450, max=58607, avg=16073.72, stdev=4743.05 00:19:28.390 clat (usec): min=1235, max=7037, avg=3879.19, stdev=679.82 00:19:28.390 lat (usec): min=1249, max=7063, avg=3895.26, stdev=679.65 00:19:28.390 clat percentiles (usec): 00:19:28.390 | 1.00th=[ 1926], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3261], 00:19:28.390 | 30.00th=[ 3458], 40.00th=[ 3884], 50.00th=[ 4015], 60.00th=[ 4080], 00:19:28.390 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 4948], 00:19:28.390 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 6128], 99.95th=[ 6259], 00:19:28.390 | 99.99th=[ 6390] 00:19:28.390 bw ( KiB/s): min=15104, max=17408, per=24.13%, avg=16218.67, stdev=880.00, samples=9 00:19:28.390 iops : min= 1888, max= 2176, avg=2027.33, stdev=110.00, samples=9 00:19:28.390 lat (msec) : 2=1.53%, 4=47.02%, 10=51.45% 00:19:28.390 cpu : usr=91.56%, sys=7.60%, ctx=26, majf=0, minf=9 00:19:28.390 IO depths : 1=0.1%, 2=11.8%, 4=60.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.391 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.391 issued rwts: total=10179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.391 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:28.391 filename1: (groupid=0, jobs=1): err= 0: pid=82978: Fri Jul 12 16:23:11 2024 00:19:28.391 read: IOPS=2094, BW=16.4MiB/s (17.2MB/s)(81.8MiB/5002msec) 00:19:28.391 slat (usec): min=3, max=172, avg=11.86, stdev= 5.98 00:19:28.391 clat (usec): min=620, max=8014, avg=3782.49, stdev=928.75 00:19:28.391 lat (usec): min=628, max=8027, avg=3794.35, stdev=929.35 00:19:28.391 clat percentiles (usec): 00:19:28.391 | 1.00th=[ 1336], 5.00th=[ 1467], 10.00th=[ 2933], 20.00th=[ 3195], 00:19:28.391 | 30.00th=[ 3326], 
40.00th=[ 3720], 50.00th=[ 3949], 60.00th=[ 4047], 00:19:28.391 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4883], 95.00th=[ 5080], 00:19:28.391 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6849], 99.95th=[ 7898], 00:19:28.391 | 99.99th=[ 7898] 00:19:28.391 bw ( KiB/s): min=15104, max=21184, per=24.92%, avg=16750.22, stdev=1948.27, samples=9 00:19:28.391 iops : min= 1888, max= 2648, avg=2093.78, stdev=243.53, samples=9 00:19:28.391 lat (usec) : 750=0.08%, 1000=0.03% 00:19:28.391 lat (msec) : 2=7.06%, 4=45.78%, 10=47.05% 00:19:28.391 cpu : usr=91.34%, sys=7.36%, ctx=135, majf=0, minf=9 00:19:28.391 IO depths : 1=0.1%, 2=9.5%, 4=61.3%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.391 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.391 issued rwts: total=10475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.391 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:28.391 filename1: (groupid=0, jobs=1): err= 0: pid=82979: Fri Jul 12 16:23:11 2024 00:19:28.391 read: IOPS=2235, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5002msec) 00:19:28.391 slat (nsec): min=4683, max=57180, avg=13627.26, stdev=5204.08 00:19:28.391 clat (usec): min=1199, max=6484, avg=3541.07, stdev=952.83 00:19:28.391 lat (usec): min=1210, max=6500, avg=3554.70, stdev=953.71 00:19:28.391 clat percentiles (usec): 00:19:28.391 | 1.00th=[ 1319], 5.00th=[ 1401], 10.00th=[ 1975], 20.00th=[ 3032], 00:19:28.391 | 30.00th=[ 3195], 40.00th=[ 3326], 50.00th=[ 3720], 60.00th=[ 3916], 00:19:28.391 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 4948], 00:19:28.391 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 5538], 99.95th=[ 5538], 00:19:28.391 | 99.99th=[ 6390] 00:19:28.391 bw ( KiB/s): min=15232, max=21376, per=26.78%, avg=18000.00, stdev=2190.86, samples=9 00:19:28.391 iops : min= 1904, max= 2672, avg=2250.00, stdev=273.86, samples=9 00:19:28.391 lat (msec) : 2=10.17%, 4=55.30%, 10=34.53% 00:19:28.391 cpu : usr=90.38%, sys=8.58%, ctx=551, majf=0, minf=9 00:19:28.391 IO depths : 1=0.1%, 2=4.8%, 4=63.7%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.391 complete : 0=0.0%, 4=98.2%, 8=1.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.391 issued rwts: total=11181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.391 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:28.391 00:19:28.391 Run status group 0 (all jobs): 00:19:28.391 READ: bw=65.6MiB/s (68.8MB/s), 15.9MiB/s-17.5MiB/s (16.7MB/s-18.3MB/s), io=328MiB (344MB), run=5001-5002msec 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 ************************************ 00:19:28.391 END TEST fio_dif_rand_params 00:19:28.391 ************************************ 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.391 00:19:28.391 real 0m23.029s 00:19:28.391 user 1m54.755s 00:19:28.391 sys 0m9.356s 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 16:23:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:28.391 16:23:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:28.391 16:23:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:28.391 16:23:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 ************************************ 00:19:28.391 START TEST fio_dif_digest 00:19:28.391 ************************************ 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:19:28.391 16:23:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 bdev_null0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:28.391 [2024-07-12 16:23:11.481667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:28.391 { 00:19:28.391 "params": { 00:19:28.391 "name": "Nvme$subsystem", 00:19:28.391 "trtype": "$TEST_TRANSPORT", 00:19:28.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:28.391 "adrfam": "ipv4", 00:19:28.391 "trsvcid": "$NVMF_PORT", 00:19:28.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:28.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:28.391 "hdgst": ${hdgst:-false}, 00:19:28.391 "ddgst": 
${ddgst:-false} 00:19:28.391 }, 00:19:28.391 "method": "bdev_nvme_attach_controller" 00:19:28.391 } 00:19:28.391 EOF 00:19:28.391 )") 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.391 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:28.392 "params": { 00:19:28.392 "name": "Nvme0", 00:19:28.392 "trtype": "tcp", 00:19:28.392 "traddr": "10.0.0.2", 00:19:28.392 "adrfam": "ipv4", 00:19:28.392 "trsvcid": "4420", 00:19:28.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:28.392 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:28.392 "hdgst": true, 00:19:28.392 "ddgst": true 00:19:28.392 }, 00:19:28.392 "method": "bdev_nvme_attach_controller" 00:19:28.392 }' 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:28.392 16:23:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:28.392 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:28.392 ... 
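A minimal hand-written sketch of what the fio_dif_digest pieces above amount to: the bdev_nvme_attach_controller call printed just above (header and data digest enabled) wrapped in the standard SPDK "subsystems" JSON layout, then driven by fio through the spdk_bdev ioengine. The temp-file paths, the job-file layout and the Nvme0n1 bdev name are illustrative assumptions, not values copied from this log; the real run pipes both files in via /dev/fd. The fio banner and per-thread results continue below.

    # sketch only -- paths, job layout and bdev name are assumptions
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": true, "ddgst": true } } ] }
      ]
    }
    EOF
    cat > /tmp/digest.fio <<'EOF'
    [global]
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=10
    [filename0]
    ; namespace bdev created by the attach call (assumed name)
    filename=Nvme0n1
    EOF
    # the spdk_bdev ioengine is loaded by preloading the SPDK fio plugin, as in the trace above
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/digest.fio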
00:19:28.392 fio-3.35 00:19:28.392 Starting 3 threads 00:19:40.597 00:19:40.597 filename0: (groupid=0, jobs=1): err= 0: pid=83084: Fri Jul 12 16:23:22 2024 00:19:40.597 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(290MiB/10004msec) 00:19:40.597 slat (nsec): min=7165, max=43344, avg=10542.86, stdev=4403.80 00:19:40.597 clat (usec): min=9358, max=15185, avg=12932.40, stdev=572.74 00:19:40.597 lat (usec): min=9366, max=15211, avg=12942.94, stdev=573.22 00:19:40.597 clat percentiles (usec): 00:19:40.597 | 1.00th=[11994], 5.00th=[12387], 10.00th=[12387], 20.00th=[12518], 00:19:40.597 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:40.597 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[13960], 00:19:40.597 | 99.00th=[14222], 99.50th=[14353], 99.90th=[15139], 99.95th=[15139], 00:19:40.597 | 99.99th=[15139] 00:19:40.597 bw ( KiB/s): min=29184, max=30720, per=33.31%, avg=29609.30, stdev=461.89, samples=20 00:19:40.597 iops : min= 228, max= 240, avg=231.30, stdev= 3.63, samples=20 00:19:40.597 lat (msec) : 10=0.13%, 20=99.87% 00:19:40.597 cpu : usr=91.59%, sys=7.86%, ctx=103, majf=0, minf=9 00:19:40.597 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.597 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.597 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:40.597 filename0: (groupid=0, jobs=1): err= 0: pid=83085: Fri Jul 12 16:23:22 2024 00:19:40.598 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(290MiB/10005msec) 00:19:40.598 slat (nsec): min=7169, max=48516, avg=10516.01, stdev=4492.66 00:19:40.598 clat (usec): min=10842, max=14385, avg=12932.68, stdev=560.24 00:19:40.598 lat (usec): min=10850, max=14399, avg=12943.20, stdev=560.85 00:19:40.598 clat percentiles (usec): 00:19:40.598 | 1.00th=[12125], 5.00th=[12387], 10.00th=[12387], 20.00th=[12518], 00:19:40.598 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:40.598 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960], 00:19:40.598 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14353], 99.95th=[14353], 00:19:40.598 | 99.99th=[14353] 00:19:40.598 bw ( KiB/s): min=29184, max=30720, per=33.31%, avg=29606.40, stdev=464.49, samples=20 00:19:40.598 iops : min= 228, max= 240, avg=231.30, stdev= 3.63, samples=20 00:19:40.598 lat (msec) : 20=100.00% 00:19:40.598 cpu : usr=91.69%, sys=7.79%, ctx=10, majf=0, minf=0 00:19:40.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.598 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.598 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:40.598 filename0: (groupid=0, jobs=1): err= 0: pid=83086: Fri Jul 12 16:23:22 2024 00:19:40.598 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(290MiB/10006msec) 00:19:40.598 slat (nsec): min=7230, max=73445, avg=11744.90, stdev=6829.17 00:19:40.598 clat (usec): min=11946, max=14515, avg=12930.97, stdev=553.88 00:19:40.598 lat (usec): min=11953, max=14561, avg=12942.72, stdev=555.30 00:19:40.598 clat percentiles (usec): 00:19:40.598 | 1.00th=[11994], 5.00th=[12387], 10.00th=[12387], 20.00th=[12518], 00:19:40.598 | 30.00th=[12518], 40.00th=[12649], 
50.00th=[12780], 60.00th=[12911], 00:19:40.598 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960], 00:19:40.598 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:19:40.598 | 99.99th=[14484] 00:19:40.598 bw ( KiB/s): min=29184, max=30720, per=33.31%, avg=29606.40, stdev=464.49, samples=20 00:19:40.598 iops : min= 228, max= 240, avg=231.30, stdev= 3.63, samples=20 00:19:40.598 lat (msec) : 20=100.00% 00:19:40.598 cpu : usr=91.35%, sys=8.06%, ctx=13, majf=0, minf=0 00:19:40.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.598 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.598 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:40.598 00:19:40.598 Run status group 0 (all jobs): 00:19:40.598 READ: bw=86.8MiB/s (91.0MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=869MiB (911MB), run=10004-10006msec 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:40.598 ************************************ 00:19:40.598 END TEST fio_dif_digest 00:19:40.598 ************************************ 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.598 00:19:40.598 real 0m10.872s 00:19:40.598 user 0m28.045s 00:19:40.598 sys 0m2.596s 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:40.598 16:23:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:40.598 16:23:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:40.598 16:23:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.598 rmmod nvme_tcp 00:19:40.598 rmmod nvme_fabrics 00:19:40.598 rmmod nvme_keyring 00:19:40.598 16:23:22 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82340 ']' 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82340 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 82340 ']' 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 82340 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82340 00:19:40.598 killing process with pid 82340 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82340' 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@967 -- # kill 82340 00:19:40.598 16:23:22 nvmf_dif -- common/autotest_common.sh@972 -- # wait 82340 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:19:40.598 16:23:22 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:40.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:40.598 Waiting for block devices as requested 00:19:40.598 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:40.598 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:40.598 16:23:23 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:40.598 16:23:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:40.598 16:23:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.598 16:23:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.598 16:23:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.598 16:23:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:40.598 16:23:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.598 16:23:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:40.598 ************************************ 00:19:40.598 END TEST nvmf_dif 00:19:40.598 ************************************ 00:19:40.598 00:19:40.598 real 0m58.935s 00:19:40.598 user 3m37.007s 00:19:40.598 sys 0m20.503s 00:19:40.598 16:23:23 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:40.598 16:23:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:40.598 16:23:23 -- common/autotest_common.sh@1142 -- # return 0 00:19:40.598 16:23:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:40.598 16:23:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:40.598 16:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.598 16:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:40.598 ************************************ 00:19:40.598 START TEST nvmf_abort_qd_sizes 00:19:40.598 ************************************ 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:40.598 * Looking for test storage... 00:19:40.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.598 16:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:40.599 16:23:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:40.599 Cannot find device "nvmf_tgt_br" 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.599 Cannot find device "nvmf_tgt_br2" 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:40.599 Cannot find device "nvmf_tgt_br" 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:40.599 Cannot find device "nvmf_tgt_br2" 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.599 16:23:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:40.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:19:40.599 00:19:40.599 --- 10.0.0.2 ping statistics --- 00:19:40.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.599 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:40.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:19:40.599 00:19:40.599 --- 10.0.0.3 ping statistics --- 00:19:40.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.599 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:40.599 00:19:40.599 --- 10.0.0.1 ping statistics --- 00:19:40.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.599 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:40.599 16:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:40.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.115 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:41.115 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83689 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83689 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 83689 ']' 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.115 16:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:41.374 [2024-07-12 16:23:24.867851] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
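Condensed from the nvmf_veth_init trace above, the topology those ping checks just verified looks roughly like this; every command appears in the trace and is only consolidated here for readability (link-up steps are omitted). The target application whose startup banner appears around this point is then run inside the namespace via ip netns exec.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # host side, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side, 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side, 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT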
00:19:41.374 [2024-07-12 16:23:24.867983] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.374 [2024-07-12 16:23:25.009396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:41.374 [2024-07-12 16:23:25.078619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.374 [2024-07-12 16:23:25.078681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.374 [2024-07-12 16:23:25.078696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.374 [2024-07-12 16:23:25.078706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.374 [2024-07-12 16:23:25.078715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.374 [2024-07-12 16:23:25.079681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.374 [2024-07-12 16:23:25.079910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.374 [2024-07-12 16:23:25.080001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:41.374 [2024-07-12 16:23:25.080009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.632 [2024-07-12 16:23:25.112625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:19:41.632 16:23:25 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:19:41.632 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
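The nvme_in_userspace trace above reduces to the following consolidated sketch: build the PCI class code from class 01 (mass storage), subclass 08 (NVM) and prog-if 02 (NVMe), pick matching functions out of lspci, and keep those still bound to the kernel nvme driver. The pipeline is copied from the traced commands; only the loop wrapper and comments are added here.

    # class 01, subclass 08, prog-if 02  =>  NVMe controllers
    bdfs=($(lspci -mm -n -D | grep -i -- -p02 | \
            awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
    nvmes=()
    for bdf in "${bdfs[@]}"; do
        # the allow/block lists are empty in this run, so every BDF passes pci_can_use;
        # keep only devices still attached to the kernel nvme driver
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
    done
    printf '%s\n' "${nvmes[@]}"
    # here this yields 0000:00:10.0 and 0000:00:11.0; the first is handed to spdk_target_abort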
00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.633 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:41.633 ************************************ 00:19:41.633 START TEST spdk_target_abort 00:19:41.633 ************************************ 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:41.633 spdk_targetn1 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:41.633 [2024-07-12 16:23:25.327551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.633 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:41.633 [2024-07-12 16:23:25.355965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.891 16:23:25 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:41.891 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:45.172 Initializing NVMe Controllers 00:19:45.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:45.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:45.172 Initialization complete. Launching workers. 
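The rabort call traced above boils down to the loop below; the queue depths, workload flags and subsystem string are taken verbatim from the trace, and the per-queue-depth results follow in the log. -q sets the queue depth, -w rw with -M 50 requests a mixed workload with 50% reads (per the abort example's usual option meanings), and -o 4096 uses 4 KiB I/Os; the app submits aborts against outstanding commands and reports submitted/successful counts for each run.

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done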
00:19:45.172 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10011, failed: 0 00:19:45.172 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1038, failed to submit 8973 00:19:45.172 success 840, unsuccess 198, failed 0 00:19:45.172 16:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:45.172 16:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:48.457 Initializing NVMe Controllers 00:19:48.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:48.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:48.457 Initialization complete. Launching workers. 00:19:48.457 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9001, failed: 0 00:19:48.457 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1163, failed to submit 7838 00:19:48.457 success 385, unsuccess 778, failed 0 00:19:48.457 16:23:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:48.457 16:23:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:51.785 Initializing NVMe Controllers 00:19:51.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:51.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:51.785 Initialization complete. Launching workers. 
00:19:51.785 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30223, failed: 0 00:19:51.785 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2315, failed to submit 27908 00:19:51.785 success 437, unsuccess 1878, failed 0 00:19:51.785 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:19:51.785 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.785 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:51.785 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.785 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:51.785 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.785 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83689 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 83689 ']' 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 83689 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83689 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83689' 00:19:52.043 killing process with pid 83689 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 83689 00:19:52.043 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 83689 00:19:52.302 00:19:52.302 real 0m10.627s 00:19:52.302 user 0m38.053s 00:19:52.302 sys 0m2.904s 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.302 ************************************ 00:19:52.302 END TEST spdk_target_abort 00:19:52.302 ************************************ 00:19:52.302 16:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:19:52.302 16:23:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:19:52.302 16:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:52.302 16:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.302 16:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:52.302 
************************************ 00:19:52.302 START TEST kernel_target_abort 00:19:52.302 ************************************ 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:52.302 16:23:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:52.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:52.870 Waiting for block devices as requested 00:19:52.870 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:52.870 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:52.870 No valid GPT data, bailing 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:52.870 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:53.130 No valid GPT data, bailing 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
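For context on where kernel_subsystem, kernel_namespace and kernel_port point: a rough sketch of the standard nvmet configfs sequence they feed into, written against the upstream kernel attribute names rather than copied from nvmf/common.sh (whose exact order may differ). The backing device is whichever /dev/nvmeXnY the block-device scan around this point reports as free ("No valid GPT data, bailing" means no partition table was found); /dev/nvme0n1 below is only a stand-in, and nvmet-tcp must be loaded alongside nvmet for the tcp transport.

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # stand-in; use the device the scan selected
    echo 1            > "$subsys/namespaces/1/enable"
    echo tcp          > "$port/addr_trtype"
    echo ipv4         > "$port/addr_adrfam"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo 4420         > "$port/addr_trsvcid"
    ln -s "$subsys" "$port/subsystems/nqn.2016-06.io.spdk:testnqn"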
00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:53.130 No valid GPT data, bailing 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:53.130 No valid GPT data, bailing 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 --hostid=0f8ee936-81ee-4845-9dc2-94c8381dda10 -a 10.0.0.1 -t tcp -s 4420 00:19:53.130 00:19:53.130 Discovery Log Number of Records 2, Generation counter 2 00:19:53.130 =====Discovery Log Entry 0====== 00:19:53.130 trtype: tcp 00:19:53.130 adrfam: ipv4 00:19:53.130 subtype: current discovery subsystem 00:19:53.130 treq: not specified, sq flow control disable supported 00:19:53.130 portid: 1 00:19:53.130 trsvcid: 4420 00:19:53.130 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:53.130 traddr: 10.0.0.1 00:19:53.130 eflags: none 00:19:53.130 sectype: none 00:19:53.130 =====Discovery Log Entry 1====== 00:19:53.130 trtype: tcp 00:19:53.130 adrfam: ipv4 00:19:53.130 subtype: nvme subsystem 00:19:53.130 treq: not specified, sq flow control disable supported 00:19:53.130 portid: 1 00:19:53.130 trsvcid: 4420 00:19:53.130 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:53.130 traddr: 10.0.0.1 00:19:53.130 eflags: none 00:19:53.130 sectype: none 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:53.130 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:53.131 16:23:36 
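The configure_kernel_target steps traced above build a Linux kernel NVMe/TCP target entirely through configfs and then confirm it with nvme discover (both the discovery subsystem and testnqn show up in the discovery log). The trace shows the echoed values but not their destination files, so the attribute names below come from the standard nvmet configfs layout rather than from this log; the trace also writes a model string (SPDK-nqn.2016-06.io.spdk:testnqn) whose destination is likewise not shown and is omitted here:

  modprobe nvmet nvmet-tcp   # the trace loads only nvmet; loading nvmet-tcp explicitly keeps the sketch self-contained
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"

  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"

  # expose the subsystem on the port, then verify it is reachable
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420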
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:53.131 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:56.419 Initializing NVMe Controllers 00:19:56.419 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:56.419 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:56.419 Initialization complete. Launching workers. 00:19:56.419 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32638, failed: 0 00:19:56.419 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32638, failed to submit 0 00:19:56.419 success 0, unsuccess 32638, failed 0 00:19:56.419 16:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:56.419 16:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:59.703 Initializing NVMe Controllers 00:19:59.703 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:59.703 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:59.703 Initialization complete. Launching workers. 
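Each pass of the qds loop reruns the same abort example against the kernel target, varying only -q. The flag glosses below are my reading of the perf-style options rather than something spelled out in the log: -q is the queue depth (4, 24, 64 across the three runs), -w/-M select a mixed workload with 50% reads, -o is the I/O size in bytes, and -r is the transport ID of the target configured above. A representative invocation:

  cd /home/vagrant/spdk_repo/spdk
  # only -q changes between the three runs: 4, 24, 64
  ./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The summary lines after each run split the result into aborts the tool managed to submit and those it could not, alongside the total I/Os completed; raising -q is what shifts that split between the three runs.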
00:19:59.703 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66839, failed: 0 00:19:59.703 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28664, failed to submit 38175 00:19:59.703 success 0, unsuccess 28664, failed 0 00:19:59.703 16:23:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:59.703 16:23:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:02.984 Initializing NVMe Controllers 00:20:02.984 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:02.984 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:02.984 Initialization complete. Launching workers. 00:20:02.984 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75855, failed: 0 00:20:02.984 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18982, failed to submit 56873 00:20:02.984 success 0, unsuccess 18982, failed 0 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:02.984 16:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:03.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:04.931 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:04.931 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:05.205 00:20:05.205 real 0m12.747s 00:20:05.205 user 0m6.070s 00:20:05.205 sys 0m4.116s 00:20:05.205 16:23:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:05.205 16:23:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:05.205 ************************************ 00:20:05.205 END TEST kernel_target_abort 00:20:05.205 ************************************ 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:05.205 
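clean_kernel_target then tears the configfs tree down in roughly the reverse order it was created. As above, the destination of the traced 'echo 0' is not visible in the log; pointing it at the namespace enable attribute is an assumption:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  echo 0 > "$subsys/namespaces/1/enable"                  # assumed target of the traced 'echo 0'
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"    # drop the port -> subsystem link first
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet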
16:23:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.205 rmmod nvme_tcp 00:20:05.205 rmmod nvme_fabrics 00:20:05.205 rmmod nvme_keyring 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83689 ']' 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83689 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 83689 ']' 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 83689 00:20:05.205 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83689) - No such process 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 83689 is not found' 00:20:05.205 Process with pid 83689 is not found 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:05.205 16:23:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:05.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:05.782 Waiting for block devices as requested 00:20:05.782 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:05.782 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:05.782 16:23:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.782 16:23:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.782 16:23:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.782 16:23:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.782 16:23:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.782 16:23:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:05.782 16:23:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.041 16:23:49 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:06.041 00:20:06.041 real 0m26.066s 00:20:06.041 user 0m45.091s 00:20:06.041 sys 0m8.343s 00:20:06.041 16:23:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.041 16:23:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:06.041 ************************************ 00:20:06.041 END TEST nvmf_abort_qd_sizes 00:20:06.041 ************************************ 00:20:06.041 16:23:49 -- common/autotest_common.sh@1142 -- # return 0 00:20:06.041 16:23:49 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:06.041 16:23:49 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:20:06.041 16:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.041 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:20:06.041 ************************************ 00:20:06.041 START TEST keyring_file 00:20:06.041 ************************************ 00:20:06.041 16:23:49 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:06.041 * Looking for test storage... 00:20:06.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.041 16:23:49 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.041 16:23:49 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.041 16:23:49 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.041 16:23:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.041 16:23:49 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.041 16:23:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.041 16:23:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:06.041 16:23:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@47 -- # : 0 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.27ccV4Yk6L 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.27ccV4Yk6L 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.27ccV4Yk6L 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.27ccV4Yk6L 00:20:06.041 16:23:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cM1GkPHYvX 00:20:06.041 16:23:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:06.041 16:23:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:06.298 16:23:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cM1GkPHYvX 00:20:06.298 16:23:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cM1GkPHYvX 00:20:06.298 16:23:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.cM1GkPHYvX 00:20:06.298 16:23:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=84569 00:20:06.298 16:23:49 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.298 16:23:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84569 00:20:06.298 16:23:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84569 ']' 00:20:06.298 16:23:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.298 16:23:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.298 16:23:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.298 16:23:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.298 16:23:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:06.298 [2024-07-12 16:23:49.872463] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
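From this point the test is driven over JSON-RPC sockets: the spdk_tgt launched above answers on the default /var/tmp/spdk.sock, and the bdevperf instance started shortly afterwards (with -z -r /var/tmp/bperf.sock) is controlled through /var/tmp/bperf.sock. waitforlisten blocks until an app's RPC server is up; a crude stand-in for it, assuming the repo path used in this run and using the rpc_get_methods RPC as a liveness probe:

  SPDK=/home/vagrant/spdk_repo/spdk

  "$SPDK/build/bin/spdk_tgt" &
  tgtpid=$!

  # poll the RPC socket until the target answers (the real waitforlisten helper is more careful than this)
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      sleep 0.2
  done
  echo "spdk_tgt (pid $tgtpid) is accepting RPCs"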
00:20:06.298 [2024-07-12 16:23:49.872589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84569 ] 00:20:06.298 [2024-07-12 16:23:50.003375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.557 [2024-07-12 16:23:50.060389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.557 [2024-07-12 16:23:50.087376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:06.557 16:23:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:06.557 [2024-07-12 16:23:50.210927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.557 null0 00:20:06.557 [2024-07-12 16:23:50.242891] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.557 [2024-07-12 16:23:50.243109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:06.557 [2024-07-12 16:23:50.250895] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.557 16:23:50 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:06.557 [2024-07-12 16:23:50.262914] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:06.557 request: 00:20:06.557 { 00:20:06.557 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:06.557 "secure_channel": false, 00:20:06.557 "listen_address": { 00:20:06.557 "trtype": "tcp", 00:20:06.557 "traddr": "127.0.0.1", 00:20:06.557 "trsvcid": "4420" 00:20:06.557 }, 00:20:06.557 "method": "nvmf_subsystem_add_listener", 00:20:06.557 "req_id": 1 00:20:06.557 } 00:20:06.557 Got JSON-RPC error response 00:20:06.557 response: 00:20:06.557 { 00:20:06.557 "code": -32602, 00:20:06.557 "message": "Invalid parameters" 00:20:06.557 } 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
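The nvmf_subsystem_add_listener failure above is a deliberate negative check: the target already listens on 127.0.0.1:4420 for nqn.2016-06.io.spdk:cnode0, so the duplicate registration is rejected ("Listener already exists", surfaced as JSON-RPC error -32602). The positive path that follows runs against the bdevperf app instead: both key files are registered in its keyring and an NVMe/TCP controller is attached using key0 as the TLS PSK. Condensed from the rpc.py calls in the trace (the key paths are the mktemp names from this particular run):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  $RPC keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L
  $RPC keyring_file_add_key key1 /tmp/tmp.cM1GkPHYvX

  # attach over TCP, authenticating with the PSK registered as key0
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

  # key0 is now referenced twice: once by the keyring entry, once by the live controller
  $RPC keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'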
00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.557 16:23:50 keyring_file -- keyring/file.sh@46 -- # bperfpid=84573 00:20:06.557 16:23:50 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84573 /var/tmp/bperf.sock 00:20:06.557 16:23:50 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84573 ']' 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.557 16:23:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:06.814 [2024-07-12 16:23:50.326366] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 00:20:06.814 [2024-07-12 16:23:50.326468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84573 ] 00:20:06.814 [2024-07-12 16:23:50.467103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.814 [2024-07-12 16:23:50.536207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.071 [2024-07-12 16:23:50.570119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:07.636 16:23:51 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.636 16:23:51 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:07.636 16:23:51 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L 00:20:07.636 16:23:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L 00:20:07.893 16:23:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cM1GkPHYvX 00:20:07.893 16:23:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cM1GkPHYvX 00:20:08.150 16:23:51 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:20:08.150 16:23:51 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:20:08.150 16:23:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:08.150 16:23:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:08.150 16:23:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:08.407 16:23:52 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.27ccV4Yk6L == 
\/\t\m\p\/\t\m\p\.\2\7\c\c\V\4\Y\k\6\L ]] 00:20:08.407 16:23:52 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:20:08.407 16:23:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:08.407 16:23:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:08.407 16:23:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:08.407 16:23:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:08.664 16:23:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cM1GkPHYvX == \/\t\m\p\/\t\m\p\.\c\M\1\G\k\P\H\Y\v\X ]] 00:20:08.664 16:23:52 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:20:08.664 16:23:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:08.664 16:23:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:08.664 16:23:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:08.664 16:23:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:08.664 16:23:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:08.921 16:23:52 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:20:08.921 16:23:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:20:08.921 16:23:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:08.921 16:23:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:08.921 16:23:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:08.921 16:23:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:08.921 16:23:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:09.231 16:23:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:09.231 16:23:52 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:09.231 16:23:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:09.488 [2024-07-12 16:23:53.050810] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.488 nvme0n1 00:20:09.488 16:23:53 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:20:09.488 16:23:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:09.488 16:23:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:09.488 16:23:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:09.488 16:23:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:09.488 16:23:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:09.746 16:23:53 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:20:09.746 16:23:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:20:09.746 16:23:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:09.746 16:23:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:09.746 16:23:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:20:09.746 16:23:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:09.746 16:23:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:10.003 16:23:53 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:20:10.003 16:23:53 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:10.260 Running I/O for 1 seconds... 00:20:11.193 00:20:11.193 Latency(us) 00:20:11.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.193 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:20:11.193 nvme0n1 : 1.01 10904.42 42.60 0.00 0.00 11692.35 6583.39 24188.74 00:20:11.193 =================================================================================================================== 00:20:11.193 Total : 10904.42 42.60 0.00 0.00 11692.35 6583.39 24188.74 00:20:11.193 0 00:20:11.193 16:23:54 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:11.193 16:23:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:11.451 16:23:55 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:20:11.451 16:23:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:11.451 16:23:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:11.451 16:23:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:11.451 16:23:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:11.451 16:23:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:11.709 16:23:55 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:20:11.709 16:23:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:20:11.709 16:23:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:11.709 16:23:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:11.709 16:23:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:11.709 16:23:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:11.709 16:23:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:11.968 16:23:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:20:11.968 16:23:55 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:11.968 16:23:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:11.968 16:23:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:11.968 16:23:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:11.968 16:23:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:11.968 16:23:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:11.968 16:23:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:20:11.968 16:23:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:11.968 16:23:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:12.226 [2024-07-12 16:23:55.928427] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:12.226 [2024-07-12 16:23:55.929295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164ec20 (107): Transport endpoint is not connected 00:20:12.226 [2024-07-12 16:23:55.930283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164ec20 (9): Bad file descriptor 00:20:12.226 [2024-07-12 16:23:55.931281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:12.226 [2024-07-12 16:23:55.931308] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:12.226 [2024-07-12 16:23:55.931318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:12.226 request: 00:20:12.226 { 00:20:12.226 "name": "nvme0", 00:20:12.226 "trtype": "tcp", 00:20:12.226 "traddr": "127.0.0.1", 00:20:12.226 "adrfam": "ipv4", 00:20:12.226 "trsvcid": "4420", 00:20:12.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:12.226 "prchk_reftag": false, 00:20:12.226 "prchk_guard": false, 00:20:12.226 "hdgst": false, 00:20:12.226 "ddgst": false, 00:20:12.226 "psk": "key1", 00:20:12.226 "method": "bdev_nvme_attach_controller", 00:20:12.226 "req_id": 1 00:20:12.226 } 00:20:12.226 Got JSON-RPC error response 00:20:12.226 response: 00:20:12.226 { 00:20:12.226 "code": -5, 00:20:12.226 "message": "Input/output error" 00:20:12.226 } 00:20:12.226 16:23:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:12.226 16:23:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:12.226 16:23:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:12.226 16:23:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:12.484 16:23:55 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:20:12.484 16:23:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:12.484 16:23:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:12.484 16:23:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:12.484 16:23:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:12.484 16:23:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:12.743 16:23:56 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:20:12.743 16:23:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:20:12.743 16:23:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:12.743 16:23:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:12.743 16:23:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:12.743 16:23:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:20:12.743 16:23:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:13.000 16:23:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:20:13.001 16:23:56 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:20:13.001 16:23:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:13.259 16:23:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:20:13.259 16:23:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:20:13.517 16:23:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:20:13.517 16:23:57 keyring_file -- keyring/file.sh@77 -- # jq length 00:20:13.517 16:23:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:13.775 16:23:57 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:20:13.775 16:23:57 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.27ccV4Yk6L 00:20:13.775 16:23:57 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L 00:20:13.775 16:23:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:13.775 16:23:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L 00:20:13.775 16:23:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:13.775 16:23:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.775 16:23:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:13.775 16:23:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.775 16:23:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L 00:20:13.775 16:23:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L 00:20:14.034 [2024-07-12 16:23:57.597866] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.27ccV4Yk6L': 0100660 00:20:14.034 [2024-07-12 16:23:57.597924] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:14.034 request: 00:20:14.034 { 00:20:14.034 "name": "key0", 00:20:14.034 "path": "/tmp/tmp.27ccV4Yk6L", 00:20:14.034 "method": "keyring_file_add_key", 00:20:14.034 "req_id": 1 00:20:14.034 } 00:20:14.034 Got JSON-RPC error response 00:20:14.034 response: 00:20:14.034 { 00:20:14.034 "code": -1, 00:20:14.034 "message": "Operation not permitted" 00:20:14.034 } 00:20:14.034 16:23:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:14.034 16:23:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.034 16:23:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.034 16:23:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.034 16:23:57 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.27ccV4Yk6L 00:20:14.034 16:23:57 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L 00:20:14.034 16:23:57 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.27ccV4Yk6L 00:20:14.292 16:23:57 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.27ccV4Yk6L 00:20:14.292 16:23:57 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:20:14.292 16:23:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:14.292 16:23:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:14.292 16:23:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:14.292 16:23:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:14.292 16:23:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:14.549 16:23:58 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:20:14.549 16:23:58 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:14.549 16:23:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:14.549 16:23:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:14.549 16:23:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:14.549 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.549 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:14.549 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.549 16:23:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:14.550 16:23:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:14.807 [2024-07-12 16:23:58.474096] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.27ccV4Yk6L': No such file or directory 00:20:14.807 [2024-07-12 16:23:58.474141] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:14.807 [2024-07-12 16:23:58.474180] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:14.807 [2024-07-12 16:23:58.474188] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:14.807 [2024-07-12 16:23:58.474196] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:14.807 request: 00:20:14.807 { 00:20:14.807 "name": "nvme0", 00:20:14.807 "trtype": "tcp", 00:20:14.807 "traddr": "127.0.0.1", 00:20:14.807 "adrfam": "ipv4", 00:20:14.807 "trsvcid": "4420", 00:20:14.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:14.807 "prchk_reftag": false, 00:20:14.807 "prchk_guard": false, 00:20:14.807 "hdgst": false, 00:20:14.807 "ddgst": false, 00:20:14.807 "psk": "key0", 00:20:14.807 "method": "bdev_nvme_attach_controller", 00:20:14.807 "req_id": 1 00:20:14.807 } 00:20:14.807 
Got JSON-RPC error response 00:20:14.807 response: 00:20:14.807 { 00:20:14.807 "code": -19, 00:20:14.807 "message": "No such device" 00:20:14.807 } 00:20:14.807 16:23:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:14.807 16:23:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.807 16:23:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.807 16:23:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.807 16:23:58 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:20:14.807 16:23:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:15.066 16:23:58 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:15.066 16:23:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:15.066 16:23:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:15.066 16:23:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:15.066 16:23:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:15.066 16:23:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:15.066 16:23:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cWXlb6kSkR 00:20:15.066 16:23:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:15.066 16:23:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:15.066 16:23:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:15.066 16:23:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:15.066 16:23:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:15.066 16:23:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:15.066 16:23:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:15.324 16:23:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cWXlb6kSkR 00:20:15.324 16:23:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cWXlb6kSkR 00:20:15.324 16:23:58 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.cWXlb6kSkR 00:20:15.324 16:23:58 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cWXlb6kSkR 00:20:15.324 16:23:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cWXlb6kSkR 00:20:15.324 16:23:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:15.324 16:23:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:15.892 nvme0n1 00:20:15.892 16:23:59 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:20:15.892 16:23:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:15.892 16:23:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:15.892 16:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:15.892 16:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:15.892 16:23:59 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:16.151 16:23:59 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:20:16.151 16:23:59 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:20:16.151 16:23:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:16.151 16:23:59 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:20:16.151 16:23:59 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:20:16.151 16:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:16.151 16:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:16.151 16:23:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:16.717 16:24:00 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:20:16.717 16:24:00 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:20:16.717 16:24:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:16.717 16:24:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:16.717 16:24:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:16.717 16:24:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:16.717 16:24:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:16.975 16:24:00 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:20:16.975 16:24:00 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:16.975 16:24:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:17.233 16:24:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:20:17.233 16:24:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:17.233 16:24:00 keyring_file -- keyring/file.sh@104 -- # jq length 00:20:17.233 16:24:00 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:20:17.233 16:24:00 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cWXlb6kSkR 00:20:17.233 16:24:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cWXlb6kSkR 00:20:17.800 16:24:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cM1GkPHYvX 00:20:17.800 16:24:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cM1GkPHYvX 00:20:17.800 16:24:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:17.800 16:24:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:18.070 nvme0n1 00:20:18.379 16:24:01 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:20:18.379 16:24:01 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:18.643 16:24:02 keyring_file -- keyring/file.sh@112 -- # config='{ 00:20:18.644 "subsystems": [ 00:20:18.644 { 00:20:18.644 "subsystem": "keyring", 00:20:18.644 "config": [ 00:20:18.644 { 00:20:18.644 "method": "keyring_file_add_key", 00:20:18.644 "params": { 00:20:18.644 "name": "key0", 00:20:18.644 "path": "/tmp/tmp.cWXlb6kSkR" 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "keyring_file_add_key", 00:20:18.644 "params": { 00:20:18.644 "name": "key1", 00:20:18.644 "path": "/tmp/tmp.cM1GkPHYvX" 00:20:18.644 } 00:20:18.644 } 00:20:18.644 ] 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "subsystem": "iobuf", 00:20:18.644 "config": [ 00:20:18.644 { 00:20:18.644 "method": "iobuf_set_options", 00:20:18.644 "params": { 00:20:18.644 "small_pool_count": 8192, 00:20:18.644 "large_pool_count": 1024, 00:20:18.644 "small_bufsize": 8192, 00:20:18.644 "large_bufsize": 135168 00:20:18.644 } 00:20:18.644 } 00:20:18.644 ] 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "subsystem": "sock", 00:20:18.644 "config": [ 00:20:18.644 { 00:20:18.644 "method": "sock_set_default_impl", 00:20:18.644 "params": { 00:20:18.644 "impl_name": "uring" 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "sock_impl_set_options", 00:20:18.644 "params": { 00:20:18.644 "impl_name": "ssl", 00:20:18.644 "recv_buf_size": 4096, 00:20:18.644 "send_buf_size": 4096, 00:20:18.644 "enable_recv_pipe": true, 00:20:18.644 "enable_quickack": false, 00:20:18.644 "enable_placement_id": 0, 00:20:18.644 "enable_zerocopy_send_server": true, 00:20:18.644 "enable_zerocopy_send_client": false, 00:20:18.644 "zerocopy_threshold": 0, 00:20:18.644 "tls_version": 0, 00:20:18.644 "enable_ktls": false 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "sock_impl_set_options", 00:20:18.644 "params": { 00:20:18.644 "impl_name": "posix", 00:20:18.644 "recv_buf_size": 2097152, 00:20:18.644 "send_buf_size": 2097152, 00:20:18.644 "enable_recv_pipe": true, 00:20:18.644 "enable_quickack": false, 00:20:18.644 "enable_placement_id": 0, 00:20:18.644 "enable_zerocopy_send_server": true, 00:20:18.644 "enable_zerocopy_send_client": false, 00:20:18.644 "zerocopy_threshold": 0, 00:20:18.644 "tls_version": 0, 00:20:18.644 "enable_ktls": false 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "sock_impl_set_options", 00:20:18.644 "params": { 00:20:18.644 "impl_name": "uring", 00:20:18.644 "recv_buf_size": 2097152, 00:20:18.644 "send_buf_size": 2097152, 00:20:18.644 "enable_recv_pipe": true, 00:20:18.644 "enable_quickack": false, 00:20:18.644 "enable_placement_id": 0, 00:20:18.644 "enable_zerocopy_send_server": false, 00:20:18.644 "enable_zerocopy_send_client": false, 00:20:18.644 "zerocopy_threshold": 0, 00:20:18.644 "tls_version": 0, 00:20:18.644 "enable_ktls": false 00:20:18.644 } 00:20:18.644 } 00:20:18.644 ] 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "subsystem": "vmd", 00:20:18.644 "config": [] 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "subsystem": "accel", 00:20:18.644 "config": [ 00:20:18.644 { 00:20:18.644 "method": "accel_set_options", 00:20:18.644 "params": { 00:20:18.644 "small_cache_size": 128, 00:20:18.644 "large_cache_size": 16, 00:20:18.644 "task_count": 2048, 00:20:18.644 "sequence_count": 2048, 00:20:18.644 "buf_count": 2048 00:20:18.644 } 00:20:18.644 } 00:20:18.644 ] 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "subsystem": "bdev", 00:20:18.644 "config": [ 00:20:18.644 { 
00:20:18.644 "method": "bdev_set_options", 00:20:18.644 "params": { 00:20:18.644 "bdev_io_pool_size": 65535, 00:20:18.644 "bdev_io_cache_size": 256, 00:20:18.644 "bdev_auto_examine": true, 00:20:18.644 "iobuf_small_cache_size": 128, 00:20:18.644 "iobuf_large_cache_size": 16 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "bdev_raid_set_options", 00:20:18.644 "params": { 00:20:18.644 "process_window_size_kb": 1024 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "bdev_iscsi_set_options", 00:20:18.644 "params": { 00:20:18.644 "timeout_sec": 30 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "bdev_nvme_set_options", 00:20:18.644 "params": { 00:20:18.644 "action_on_timeout": "none", 00:20:18.644 "timeout_us": 0, 00:20:18.644 "timeout_admin_us": 0, 00:20:18.644 "keep_alive_timeout_ms": 10000, 00:20:18.644 "arbitration_burst": 0, 00:20:18.644 "low_priority_weight": 0, 00:20:18.644 "medium_priority_weight": 0, 00:20:18.644 "high_priority_weight": 0, 00:20:18.644 "nvme_adminq_poll_period_us": 10000, 00:20:18.644 "nvme_ioq_poll_period_us": 0, 00:20:18.644 "io_queue_requests": 512, 00:20:18.644 "delay_cmd_submit": true, 00:20:18.644 "transport_retry_count": 4, 00:20:18.644 "bdev_retry_count": 3, 00:20:18.644 "transport_ack_timeout": 0, 00:20:18.644 "ctrlr_loss_timeout_sec": 0, 00:20:18.644 "reconnect_delay_sec": 0, 00:20:18.644 "fast_io_fail_timeout_sec": 0, 00:20:18.644 "disable_auto_failback": false, 00:20:18.644 "generate_uuids": false, 00:20:18.644 "transport_tos": 0, 00:20:18.644 "nvme_error_stat": false, 00:20:18.644 "rdma_srq_size": 0, 00:20:18.644 "io_path_stat": false, 00:20:18.644 "allow_accel_sequence": false, 00:20:18.644 "rdma_max_cq_size": 0, 00:20:18.644 "rdma_cm_event_timeout_ms": 0, 00:20:18.644 "dhchap_digests": [ 00:20:18.644 "sha256", 00:20:18.644 "sha384", 00:20:18.644 "sha512" 00:20:18.644 ], 00:20:18.644 "dhchap_dhgroups": [ 00:20:18.644 "null", 00:20:18.644 "ffdhe2048", 00:20:18.644 "ffdhe3072", 00:20:18.644 "ffdhe4096", 00:20:18.644 "ffdhe6144", 00:20:18.644 "ffdhe8192" 00:20:18.644 ] 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "bdev_nvme_attach_controller", 00:20:18.644 "params": { 00:20:18.644 "name": "nvme0", 00:20:18.644 "trtype": "TCP", 00:20:18.644 "adrfam": "IPv4", 00:20:18.644 "traddr": "127.0.0.1", 00:20:18.644 "trsvcid": "4420", 00:20:18.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.644 "prchk_reftag": false, 00:20:18.644 "prchk_guard": false, 00:20:18.644 "ctrlr_loss_timeout_sec": 0, 00:20:18.644 "reconnect_delay_sec": 0, 00:20:18.644 "fast_io_fail_timeout_sec": 0, 00:20:18.644 "psk": "key0", 00:20:18.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:18.644 "hdgst": false, 00:20:18.644 "ddgst": false 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "bdev_nvme_set_hotplug", 00:20:18.644 "params": { 00:20:18.644 "period_us": 100000, 00:20:18.644 "enable": false 00:20:18.644 } 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "method": "bdev_wait_for_examine" 00:20:18.644 } 00:20:18.644 ] 00:20:18.644 }, 00:20:18.644 { 00:20:18.644 "subsystem": "nbd", 00:20:18.644 "config": [] 00:20:18.644 } 00:20:18.644 ] 00:20:18.644 }' 00:20:18.644 16:24:02 keyring_file -- keyring/file.sh@114 -- # killprocess 84573 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84573 ']' 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84573 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84573 00:20:18.644 killing process with pid 84573 00:20:18.644 Received shutdown signal, test time was about 1.000000 seconds 00:20:18.644 00:20:18.644 Latency(us) 00:20:18.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.644 =================================================================================================================== 00:20:18.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84573' 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@967 -- # kill 84573 00:20:18.644 16:24:02 keyring_file -- common/autotest_common.sh@972 -- # wait 84573 00:20:18.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:18.645 16:24:02 keyring_file -- keyring/file.sh@117 -- # bperfpid=84828 00:20:18.645 16:24:02 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84828 /var/tmp/bperf.sock 00:20:18.645 16:24:02 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84828 ']' 00:20:18.645 16:24:02 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:18.645 16:24:02 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.645 16:24:02 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
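The bdevperf launch that follows reads its JSON configuration from /dev/fd/63, the descriptor bash hands out for process substitution; a minimal sketch of that pattern (flags copied from the invocation below, the <(...) redirection and the $config variable are assumptions about how keyring/file.sh feeds the blob):

config='{ "subsystems": [ ... ] }'   # stands for the JSON echoed by keyring/file.sh@115 below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")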
00:20:18.645 16:24:02 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:18.645 16:24:02 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.645 16:24:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:18.645 16:24:02 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:20:18.645 "subsystems": [ 00:20:18.645 { 00:20:18.645 "subsystem": "keyring", 00:20:18.645 "config": [ 00:20:18.645 { 00:20:18.645 "method": "keyring_file_add_key", 00:20:18.645 "params": { 00:20:18.645 "name": "key0", 00:20:18.645 "path": "/tmp/tmp.cWXlb6kSkR" 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "keyring_file_add_key", 00:20:18.645 "params": { 00:20:18.645 "name": "key1", 00:20:18.645 "path": "/tmp/tmp.cM1GkPHYvX" 00:20:18.645 } 00:20:18.645 } 00:20:18.645 ] 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "subsystem": "iobuf", 00:20:18.645 "config": [ 00:20:18.645 { 00:20:18.645 "method": "iobuf_set_options", 00:20:18.645 "params": { 00:20:18.645 "small_pool_count": 8192, 00:20:18.645 "large_pool_count": 1024, 00:20:18.645 "small_bufsize": 8192, 00:20:18.645 "large_bufsize": 135168 00:20:18.645 } 00:20:18.645 } 00:20:18.645 ] 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "subsystem": "sock", 00:20:18.645 "config": [ 00:20:18.645 { 00:20:18.645 "method": "sock_set_default_impl", 00:20:18.645 "params": { 00:20:18.645 "impl_name": "uring" 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "sock_impl_set_options", 00:20:18.645 "params": { 00:20:18.645 "impl_name": "ssl", 00:20:18.645 "recv_buf_size": 4096, 00:20:18.645 "send_buf_size": 4096, 00:20:18.645 "enable_recv_pipe": true, 00:20:18.645 "enable_quickack": false, 00:20:18.645 "enable_placement_id": 0, 00:20:18.645 "enable_zerocopy_send_server": true, 00:20:18.645 "enable_zerocopy_send_client": false, 00:20:18.645 "zerocopy_threshold": 0, 00:20:18.645 "tls_version": 0, 00:20:18.645 "enable_ktls": false 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "sock_impl_set_options", 00:20:18.645 "params": { 00:20:18.645 "impl_name": "posix", 00:20:18.645 "recv_buf_size": 2097152, 00:20:18.645 "send_buf_size": 2097152, 00:20:18.645 "enable_recv_pipe": true, 00:20:18.645 "enable_quickack": false, 00:20:18.645 "enable_placement_id": 0, 00:20:18.645 "enable_zerocopy_send_server": true, 00:20:18.645 "enable_zerocopy_send_client": false, 00:20:18.645 "zerocopy_threshold": 0, 00:20:18.645 "tls_version": 0, 00:20:18.645 "enable_ktls": false 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "sock_impl_set_options", 00:20:18.645 "params": { 00:20:18.645 "impl_name": "uring", 00:20:18.645 "recv_buf_size": 2097152, 00:20:18.645 "send_buf_size": 2097152, 00:20:18.645 "enable_recv_pipe": true, 00:20:18.645 "enable_quickack": false, 00:20:18.645 "enable_placement_id": 0, 00:20:18.645 "enable_zerocopy_send_server": false, 00:20:18.645 "enable_zerocopy_send_client": false, 00:20:18.645 "zerocopy_threshold": 0, 00:20:18.645 "tls_version": 0, 00:20:18.645 "enable_ktls": false 00:20:18.645 } 00:20:18.645 } 00:20:18.645 ] 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "subsystem": "vmd", 00:20:18.645 "config": [] 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "subsystem": "accel", 00:20:18.645 "config": [ 00:20:18.645 { 00:20:18.645 "method": "accel_set_options", 00:20:18.645 "params": { 00:20:18.645 "small_cache_size": 128, 00:20:18.645 "large_cache_size": 16, 
00:20:18.645 "task_count": 2048, 00:20:18.645 "sequence_count": 2048, 00:20:18.645 "buf_count": 2048 00:20:18.645 } 00:20:18.645 } 00:20:18.645 ] 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "subsystem": "bdev", 00:20:18.645 "config": [ 00:20:18.645 { 00:20:18.645 "method": "bdev_set_options", 00:20:18.645 "params": { 00:20:18.645 "bdev_io_pool_size": 65535, 00:20:18.645 "bdev_io_cache_size": 256, 00:20:18.645 "bdev_auto_examine": true, 00:20:18.645 "iobuf_small_cache_size": 128, 00:20:18.645 "iobuf_large_cache_size": 16 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "bdev_raid_set_options", 00:20:18.645 "params": { 00:20:18.645 "process_window_size_kb": 1024 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "bdev_iscsi_set_options", 00:20:18.645 "params": { 00:20:18.645 "timeout_sec": 30 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "bdev_nvme_set_options", 00:20:18.645 "params": { 00:20:18.645 "action_on_timeout": "none", 00:20:18.645 "timeout_us": 0, 00:20:18.645 "timeout_admin_us": 0, 00:20:18.645 "keep_alive_timeout_ms": 10000, 00:20:18.645 "arbitration_burst": 0, 00:20:18.645 "low_priority_weight": 0, 00:20:18.645 "medium_priority_weight": 0, 00:20:18.645 "high_priority_weight": 0, 00:20:18.645 "nvme_adminq_poll_period_us": 10000, 00:20:18.645 "nvme_ioq_poll_period_us": 0, 00:20:18.645 "io_queue_requests": 512, 00:20:18.645 "delay_cmd_submit": true, 00:20:18.645 "transport_retry_count": 4, 00:20:18.645 "bdev_retry_count": 3, 00:20:18.645 "transport_ack_timeout": 0, 00:20:18.645 "ctrlr_loss_timeout_sec": 0, 00:20:18.645 "reconnect_delay_sec": 0, 00:20:18.645 "fast_io_fail_timeout_sec": 0, 00:20:18.645 "disable_auto_failback": false, 00:20:18.645 "generate_uuids": false, 00:20:18.645 "transport_tos": 0, 00:20:18.645 "nvme_error_stat": false, 00:20:18.645 "rdma_srq_size": 0, 00:20:18.645 "io_path_stat": false, 00:20:18.645 "allow_accel_sequence": false, 00:20:18.645 "rdma_max_cq_size": 0, 00:20:18.645 "rdma_cm_event_timeout_ms": 0, 00:20:18.645 "dhchap_digests": [ 00:20:18.645 "sha256", 00:20:18.645 "sha384", 00:20:18.645 "sha512" 00:20:18.645 ], 00:20:18.645 "dhchap_dhgroups": [ 00:20:18.645 "null", 00:20:18.645 "ffdhe2048", 00:20:18.645 "ffdhe3072", 00:20:18.645 "ffdhe4096", 00:20:18.645 "ffdhe6144", 00:20:18.645 "ffdhe8192" 00:20:18.645 ] 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "bdev_nvme_attach_controller", 00:20:18.645 "params": { 00:20:18.645 "name": "nvme0", 00:20:18.645 "trtype": "TCP", 00:20:18.645 "adrfam": "IPv4", 00:20:18.645 "traddr": "127.0.0.1", 00:20:18.645 "trsvcid": "4420", 00:20:18.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.645 "prchk_reftag": false, 00:20:18.645 "prchk_guard": false, 00:20:18.645 "ctrlr_loss_timeout_sec": 0, 00:20:18.645 "reconnect_delay_sec": 0, 00:20:18.645 "fast_io_fail_timeout_sec": 0, 00:20:18.645 "psk": "key0", 00:20:18.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:18.645 "hdgst": false, 00:20:18.645 "ddgst": false 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "bdev_nvme_set_hotplug", 00:20:18.645 "params": { 00:20:18.645 "period_us": 100000, 00:20:18.645 "enable": false 00:20:18.645 } 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "method": "bdev_wait_for_examine" 00:20:18.645 } 00:20:18.645 ] 00:20:18.645 }, 00:20:18.645 { 00:20:18.645 "subsystem": "nbd", 00:20:18.645 "config": [] 00:20:18.645 } 00:20:18.645 ] 00:20:18.645 }' 00:20:18.903 [2024-07-12 16:24:02.388731] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 
24.03.0 initialization... 00:20:18.904 [2024-07-12 16:24:02.389111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84828 ] 00:20:18.904 [2024-07-12 16:24:02.527343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.904 [2024-07-12 16:24:02.598479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.162 [2024-07-12 16:24:02.713559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:19.162 [2024-07-12 16:24:02.755796] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.727 16:24:03 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.727 16:24:03 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:19.727 16:24:03 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:20:19.727 16:24:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:19.727 16:24:03 keyring_file -- keyring/file.sh@120 -- # jq length 00:20:19.986 16:24:03 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:20:19.986 16:24:03 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:20:19.986 16:24:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:19.986 16:24:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:19.986 16:24:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:19.986 16:24:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:19.986 16:24:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:20.245 16:24:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:20.245 16:24:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:20:20.245 16:24:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:20.245 16:24:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:20.245 16:24:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:20.245 16:24:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:20.245 16:24:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:20.811 16:24:04 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:20:20.811 16:24:04 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:20:20.811 16:24:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:20.811 16:24:04 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:20:20.811 16:24:04 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:20:20.811 16:24:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:20:20.811 16:24:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.cWXlb6kSkR /tmp/tmp.cM1GkPHYvX 00:20:20.811 16:24:04 keyring_file -- keyring/file.sh@20 -- # killprocess 84828 00:20:20.811 16:24:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84828 ']' 00:20:20.811 16:24:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84828 00:20:20.811 16:24:04 
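The refcount assertions above (keyring/file.sh@121 and @122) come from the suite's get_refcnt helper, which is just keyring_get_keys filtered through jq; condensed, with the key name and socket from this run:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

The surrounding (( ... )) lines are then plain bash arithmetic checks on the value jq prints.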
keyring_file -- common/autotest_common.sh@953 -- # uname 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84828 00:20:21.070 killing process with pid 84828 00:20:21.070 Received shutdown signal, test time was about 1.000000 seconds 00:20:21.070 00:20:21.070 Latency(us) 00:20:21.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.070 =================================================================================================================== 00:20:21.070 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84828' 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@967 -- # kill 84828 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@972 -- # wait 84828 00:20:21.070 16:24:04 keyring_file -- keyring/file.sh@21 -- # killprocess 84569 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84569 ']' 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84569 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@953 -- # uname 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84569 00:20:21.070 killing process with pid 84569 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84569' 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@967 -- # kill 84569 00:20:21.070 [2024-07-12 16:24:04.744604] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.070 16:24:04 keyring_file -- common/autotest_common.sh@972 -- # wait 84569 00:20:21.328 ************************************ 00:20:21.328 END TEST keyring_file 00:20:21.328 ************************************ 00:20:21.328 00:20:21.328 real 0m15.413s 00:20:21.328 user 0m39.922s 00:20:21.328 sys 0m2.798s 00:20:21.328 16:24:04 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.328 16:24:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:21.328 16:24:05 -- common/autotest_common.sh@1142 -- # return 0 00:20:21.328 16:24:05 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:20:21.328 16:24:05 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:21.328 16:24:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:21.328 16:24:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.328 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:20:21.328 ************************************ 00:20:21.328 START TEST keyring_linux 00:20:21.328 ************************************ 00:20:21.328 16:24:05 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:21.586 * 
Looking for test storage... 00:20:21.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0f8ee936-81ee-4845-9dc2-94c8381dda10 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=0f8ee936-81ee-4845-9dc2-94c8381dda10 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.587 16:24:05 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.587 16:24:05 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.587 16:24:05 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.587 16:24:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.587 16:24:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.587 16:24:05 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.587 16:24:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:20:21.587 16:24:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@705 -- # python - 00:20:21.587 16:24:05 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:20:21.587 /tmp/:spdk-test:key0 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:20:21.587 16:24:05 keyring_linux -- nvmf/common.sh@705 -- # python - 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:20:21.587 /tmp/:spdk-test:key1 00:20:21.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.587 16:24:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84941 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.587 16:24:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84941 00:20:21.587 16:24:05 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 84941 ']' 00:20:21.587 16:24:05 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.587 16:24:05 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.587 16:24:05 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.587 16:24:05 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.587 16:24:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:21.846 [2024-07-12 16:24:05.314884] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
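The /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 files prepared above hold PSKs in the NVMe TLS interchange format built by format_interchange_psk, i.e. the inline python invoked through nvmf/common.sh. A sketch of what that formatting amounts to, using the configured key from this run; treating the appended CRC32 as little-endian is an assumption here:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib
k = sys.argv[1].encode()
crc = zlib.crc32(k).to_bytes(4, "little")   # assumed byte order for the trailing CRC32
print("NVMeTLSkey-1:00:" + base64.b64encode(k + crc).decode() + ":")
EOF

The 00 field carries the digest argument passed to prep_key (0 in this run), and the resulting string is what later gets loaded into the kernel keyring.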
00:20:21.846 [2024-07-12 16:24:05.315394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84941 ] 00:20:21.846 [2024-07-12 16:24:05.456773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.846 [2024-07-12 16:24:05.528384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.846 [2024-07-12 16:24:05.562074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:20:22.104 16:24:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:22.104 [2024-07-12 16:24:05.707431] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.104 null0 00:20:22.104 [2024-07-12 16:24:05.739327] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.104 [2024-07-12 16:24:05.739751] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.104 16:24:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:20:22.104 204740974 00:20:22.104 16:24:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:20:22.104 409746240 00:20:22.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:22.104 16:24:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84950 00:20:22.104 16:24:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84950 /var/tmp/bperf.sock 00:20:22.104 16:24:05 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 84950 ']' 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.104 16:24:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:22.104 [2024-07-12 16:24:05.819909] Starting SPDK v24.09-pre git sha1 182dd7de4 / DPDK 24.03.0 initialization... 
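The two keyctl calls above seed the kernel session keyring with the interchange-format PSKs, and the serials printed back (204740974 and 409746240) are how the kernel will be queried later. The flow that follows then enables SPDK's Linux keyring backend over the bperf socket and attaches by key name instead of by file path; roughly, with the commands as they appear in this run:

keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

This is why bdevperf was started with --wait-for-rpc: the keyring option is set before framework_start_init brings the subsystems up.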
00:20:22.104 [2024-07-12 16:24:05.820016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84950 ] 00:20:22.362 [2024-07-12 16:24:05.958818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.363 [2024-07-12 16:24:06.016234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.363 16:24:06 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.363 16:24:06 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:20:22.363 16:24:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:20:22.363 16:24:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:20:22.622 16:24:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:20:22.622 16:24:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:22.880 [2024-07-12 16:24:06.562146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:22.880 16:24:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:22.880 16:24:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:23.448 [2024-07-12 16:24:06.879323] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.448 nvme0n1 00:20:23.448 16:24:06 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:20:23.448 16:24:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:20:23.448 16:24:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:23.448 16:24:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:23.448 16:24:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:23.448 16:24:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:23.706 16:24:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:20:23.706 16:24:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:23.706 16:24:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:20:23.706 16:24:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:20:23.706 16:24:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:23.706 16:24:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:20:23.706 16:24:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:23.964 16:24:07 keyring_linux -- keyring/linux.sh@25 -- # sn=204740974 00:20:23.964 16:24:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:20:23.964 16:24:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:23.964 
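The serial and payload checks around this point boil down to asking SPDK which key it resolved and then asking the kernel for the same thing; condensed, with the jq filter and keyctl calls used by linux.sh in this run:

sn=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
keyctl search @s user :spdk-test:key0    # expected to print the same serial, 204740974 in this run
keyctl print "$sn"                       # dumps the interchange-format PSK the kernel holds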
16:24:07 keyring_linux -- keyring/linux.sh@26 -- # [[ 204740974 == \2\0\4\7\4\0\9\7\4 ]] 00:20:23.964 16:24:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 204740974 00:20:23.964 16:24:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:20:23.964 16:24:07 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:23.964 Running I/O for 1 seconds... 00:20:25.336 00:20:25.336 Latency(us) 00:20:25.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.336 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:25.336 nvme0n1 : 1.01 10407.46 40.65 0.00 0.00 12220.95 3544.90 15490.33 00:20:25.336 =================================================================================================================== 00:20:25.336 Total : 10407.46 40.65 0.00 0.00 12220.95 3544.90 15490.33 00:20:25.336 0 00:20:25.336 16:24:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:25.336 16:24:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:25.336 16:24:08 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:25.336 16:24:08 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:25.336 16:24:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:25.336 16:24:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:25.336 16:24:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:25.336 16:24:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:25.594 16:24:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:25.594 16:24:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:25.594 16:24:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:25.594 16:24:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:25.594 16:24:09 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:20:25.594 16:24:09 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:25.594 16:24:09 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:25.594 16:24:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:25.594 16:24:09 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:25.594 16:24:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:25.594 16:24:09 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:25.594 16:24:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:25.853 [2024-07-12 16:24:09.484208] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:25.853 [2024-07-12 16:24:09.484320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7b10 (107): Transport endpoint is not connected 00:20:25.853 [2024-07-12 16:24:09.485306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7b10 (9): Bad file descriptor 00:20:25.853 [2024-07-12 16:24:09.486303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:25.853 [2024-07-12 16:24:09.486330] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:25.853 [2024-07-12 16:24:09.486341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:25.853 request: 00:20:25.853 { 00:20:25.853 "name": "nvme0", 00:20:25.853 "trtype": "tcp", 00:20:25.853 "traddr": "127.0.0.1", 00:20:25.853 "adrfam": "ipv4", 00:20:25.853 "trsvcid": "4420", 00:20:25.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:25.853 "prchk_reftag": false, 00:20:25.853 "prchk_guard": false, 00:20:25.853 "hdgst": false, 00:20:25.853 "ddgst": false, 00:20:25.853 "psk": ":spdk-test:key1", 00:20:25.853 "method": "bdev_nvme_attach_controller", 00:20:25.853 "req_id": 1 00:20:25.853 } 00:20:25.853 Got JSON-RPC error response 00:20:25.853 response: 00:20:25.853 { 00:20:25.853 "code": -5, 00:20:25.853 "message": "Input/output error" 00:20:25.853 } 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@33 -- # sn=204740974 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 204740974 00:20:25.853 1 links removed 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@33 -- # sn=409746240 00:20:25.853 16:24:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 409746240 00:20:25.853 1 links removed 00:20:25.853 16:24:09 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 84950 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 84950 ']' 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 84950 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84950 00:20:25.853 killing process with pid 84950 00:20:25.853 Received shutdown signal, test time was about 1.000000 seconds 00:20:25.853 00:20:25.853 Latency(us) 00:20:25.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.853 =================================================================================================================== 00:20:25.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84950' 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 84950 00:20:25.853 16:24:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 84950 00:20:26.112 16:24:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84941 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 84941 ']' 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 84941 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84941 00:20:26.112 killing process with pid 84941 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84941' 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 84941 00:20:26.112 16:24:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 84941 00:20:26.370 ************************************ 00:20:26.370 END TEST keyring_linux 00:20:26.370 ************************************ 00:20:26.370 00:20:26.370 real 0m4.959s 00:20:26.370 user 0m10.201s 00:20:26.370 sys 0m1.329s 00:20:26.370 16:24:09 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.370 16:24:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:26.370 16:24:10 -- common/autotest_common.sh@1142 -- # return 0 00:20:26.370 16:24:10 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:20:26.370 16:24:10 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:20:26.370 16:24:10 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:20:26.370 16:24:10 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:20:26.370 16:24:10 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:20:26.370 16:24:10 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:20:26.370 16:24:10 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:20:26.370 16:24:10 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:20:26.370 16:24:10 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
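The Got JSON-RPC error response with code -5 a little earlier is the expected outcome: the attach with --psk :spdk-test:key1 is driven through the suite's NOT helper, so the test passes only because the controller could not be created with that key. A rough stand-in for that pattern (the real helper in autotest_common.sh also inspects exit codes above 128, as the es checks in the trace show):

NOT() { if "$@"; then return 1; else return 0; fi; }   # succeed only when the wrapped command fails
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1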
00:20:26.370 16:24:10 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:20:26.370 16:24:10 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:20:26.370 16:24:10 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:20:26.370 16:24:10 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:20:26.370 16:24:10 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:20:26.370 16:24:10 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:20:26.370 16:24:10 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:20:26.370 16:24:10 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:20:26.370 16:24:10 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:26.370 16:24:10 -- common/autotest_common.sh@10 -- # set +x
00:20:26.370 16:24:10 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:20:26.370 16:24:10 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:20:26.370 16:24:10 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:20:26.370 16:24:10 -- common/autotest_common.sh@10 -- # set +x
00:20:28.273 INFO: APP EXITING
00:20:28.273 INFO: killing all VMs
00:20:28.273 INFO: killing vhost app
00:20:28.273 INFO: EXIT DONE
00:20:28.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:28.838 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:20:28.838 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:20:29.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:29.403 Cleaning
00:20:29.403 Removing: /var/run/dpdk/spdk0/config
00:20:29.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:29.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:29.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:29.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:29.403 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:29.403 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:29.403 Removing: /var/run/dpdk/spdk1/config
00:20:29.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:20:29.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:20:29.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:20:29.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:20:29.403 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:20:29.403 Removing: /var/run/dpdk/spdk1/hugepage_info
00:20:29.403 Removing: /var/run/dpdk/spdk2/config
00:20:29.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:20:29.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:20:29.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:20:29.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:20:29.403 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:20:29.403 Removing: /var/run/dpdk/spdk2/hugepage_info
00:20:29.403 Removing: /var/run/dpdk/spdk3/config
00:20:29.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:20:29.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:20:29.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:20:29.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:20:29.403 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:20:29.403 Removing: /var/run/dpdk/spdk3/hugepage_info
00:20:29.403 Removing: /var/run/dpdk/spdk4/config
00:20:29.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:20:29.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:20:29.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:20:29.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:20:29.403 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:20:29.403 Removing: /var/run/dpdk/spdk4/hugepage_info
00:20:29.403 Removing: /dev/shm/nvmf_trace.0
00:20:29.403 Removing: /dev/shm/spdk_tgt_trace.pid58735
00:20:29.662 Removing: /var/run/dpdk/spdk0
00:20:29.662 Removing: /var/run/dpdk/spdk1
00:20:29.662 Removing: /var/run/dpdk/spdk2
00:20:29.662 Removing: /var/run/dpdk/spdk3
00:20:29.662 Removing: /var/run/dpdk/spdk4
00:20:29.662 Removing: /var/run/dpdk/spdk_pid58601
00:20:29.662 Removing: /var/run/dpdk/spdk_pid58735
00:20:29.662 Removing: /var/run/dpdk/spdk_pid58933
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59014
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59036
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59151
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59169
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59287
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59472
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59618
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59677
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59740
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59831
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59895
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59928
00:20:29.662 Removing: /var/run/dpdk/spdk_pid59964
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60025
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60114
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60528
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60580
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60631
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60647
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60703
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60719
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60775
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60791
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60842
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60847
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60887
00:20:29.662 Removing: /var/run/dpdk/spdk_pid60905
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61024
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61063
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61132
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61176
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61195
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61259
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61288
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61323
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61357
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61386
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61421
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61455
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61490
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61519
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61553
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61588
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61617
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61651
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61686
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61715
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61755
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61784
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61820
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61859
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61888
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61929
00:20:29.662 Removing: /var/run/dpdk/spdk_pid61988
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62068
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62363
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62375
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62412
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62427
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62443
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62462
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62476
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62492
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62511
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62524
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62540
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62559
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62572
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62588
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62601
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62622
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62632
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62651
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62665
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62680
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62715
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62724
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62754
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62818
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62846
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62850
00:20:29.662 Removing: /var/run/dpdk/spdk_pid62880
00:20:29.921 Removing: /var/run/dpdk/spdk_pid62889
00:20:29.921 Removing: /var/run/dpdk/spdk_pid62897
00:20:29.921 Removing: /var/run/dpdk/spdk_pid62939
00:20:29.921 Removing: /var/run/dpdk/spdk_pid62953
00:20:29.921 Removing: /var/run/dpdk/spdk_pid62981
00:20:29.921 Removing: /var/run/dpdk/spdk_pid62991
00:20:29.921 Removing: /var/run/dpdk/spdk_pid62995
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63004
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63014
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63023
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63033
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63037
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63071
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63092
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63107
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63130
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63139
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63147
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63182
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63199
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63220
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63233
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63235
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63248
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63250
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63263
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63265
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63267
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63341
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63383
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63482
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63518
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63555
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63575
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63592
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63606
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63643
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63659
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63723
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63739
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63778
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63846
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63891
00:20:29.921 Removing: /var/run/dpdk/spdk_pid63920
00:20:29.921 Removing: /var/run/dpdk/spdk_pid64012
00:20:29.921 Removing: /var/run/dpdk/spdk_pid64049
00:20:29.921 Removing: /var/run/dpdk/spdk_pid64087
00:20:29.921 Removing: /var/run/dpdk/spdk_pid64300
00:20:29.921 Removing: /var/run/dpdk/spdk_pid64402
00:20:29.921 Removing: /var/run/dpdk/spdk_pid64426
00:20:29.921 Removing: /var/run/dpdk/spdk_pid64736
00:20:29.921 Removing: /var/run/dpdk/spdk_pid64774
00:20:29.921 Removing: /var/run/dpdk/spdk_pid65054
00:20:29.921 Removing: /var/run/dpdk/spdk_pid65462
00:20:29.921 Removing: /var/run/dpdk/spdk_pid65725
00:20:29.921 Removing: /var/run/dpdk/spdk_pid66442
00:20:29.921 Removing: /var/run/dpdk/spdk_pid67251
00:20:29.921 Removing: /var/run/dpdk/spdk_pid67362
00:20:29.921 Removing: /var/run/dpdk/spdk_pid67435
00:20:29.921 Removing: /var/run/dpdk/spdk_pid68669
00:20:29.921 Removing: /var/run/dpdk/spdk_pid68879
00:20:29.921 Removing: /var/run/dpdk/spdk_pid72202
00:20:29.921 Removing: /var/run/dpdk/spdk_pid72506
00:20:29.921 Removing: /var/run/dpdk/spdk_pid72615
00:20:29.921 Removing: /var/run/dpdk/spdk_pid72744
00:20:29.921 Removing: /var/run/dpdk/spdk_pid72763
00:20:29.921 Removing: /var/run/dpdk/spdk_pid72779
00:20:29.921 Removing: /var/run/dpdk/spdk_pid72805
00:20:29.921 Removing: /var/run/dpdk/spdk_pid72878
00:20:29.921 Removing: /var/run/dpdk/spdk_pid73014
00:20:29.921 Removing: /var/run/dpdk/spdk_pid73158
00:20:29.921 Removing: /var/run/dpdk/spdk_pid73233
00:20:29.921 Removing: /var/run/dpdk/spdk_pid73422
00:20:29.921 Removing: /var/run/dpdk/spdk_pid73505
00:20:29.921 Removing: /var/run/dpdk/spdk_pid73585
00:20:29.921 Removing: /var/run/dpdk/spdk_pid73889
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74257
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74259
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74535
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74553
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74574
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74599
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74604
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74895
00:20:29.921 Removing: /var/run/dpdk/spdk_pid74948
00:20:29.921 Removing: /var/run/dpdk/spdk_pid75221
00:20:29.921 Removing: /var/run/dpdk/spdk_pid75410
00:20:29.921 Removing: /var/run/dpdk/spdk_pid75780
00:20:29.921 Removing: /var/run/dpdk/spdk_pid76288
00:20:29.921 Removing: /var/run/dpdk/spdk_pid77116
00:20:30.180 Removing: /var/run/dpdk/spdk_pid77686
00:20:30.180 Removing: /var/run/dpdk/spdk_pid77688
00:20:30.180 Removing: /var/run/dpdk/spdk_pid79582
00:20:30.180 Removing: /var/run/dpdk/spdk_pid79629
00:20:30.180 Removing: /var/run/dpdk/spdk_pid79691
00:20:30.180 Removing: /var/run/dpdk/spdk_pid79751
00:20:30.180 Removing: /var/run/dpdk/spdk_pid79853
00:20:30.180 Removing: /var/run/dpdk/spdk_pid79906
00:20:30.180 Removing: /var/run/dpdk/spdk_pid79955
00:20:30.180 Removing: /var/run/dpdk/spdk_pid80008
00:20:30.180 Removing: /var/run/dpdk/spdk_pid80321
00:20:30.180 Removing: /var/run/dpdk/spdk_pid81473
00:20:30.180 Removing: /var/run/dpdk/spdk_pid81615
00:20:30.180 Removing: /var/run/dpdk/spdk_pid81858
00:20:30.180 Removing: /var/run/dpdk/spdk_pid82397
00:20:30.180 Removing: /var/run/dpdk/spdk_pid82556
00:20:30.180 Removing: /var/run/dpdk/spdk_pid82712
00:20:30.180 Removing: /var/run/dpdk/spdk_pid82811
00:20:30.180 Removing: /var/run/dpdk/spdk_pid82967
00:20:30.180 Removing: /var/run/dpdk/spdk_pid83076
00:20:30.180 Removing: /var/run/dpdk/spdk_pid83732
00:20:30.180 Removing: /var/run/dpdk/spdk_pid83778
00:20:30.180 Removing: /var/run/dpdk/spdk_pid83813
00:20:30.180 Removing: /var/run/dpdk/spdk_pid84061
00:20:30.180 Removing: /var/run/dpdk/spdk_pid84096
00:20:30.180 Removing: /var/run/dpdk/spdk_pid84130
00:20:30.180 Removing: /var/run/dpdk/spdk_pid84569
00:20:30.180 Removing: /var/run/dpdk/spdk_pid84573
00:20:30.180 Removing: /var/run/dpdk/spdk_pid84828
00:20:30.180 Removing: /var/run/dpdk/spdk_pid84941
00:20:30.180 Removing: /var/run/dpdk/spdk_pid84950
00:20:30.180 Clean
00:20:30.180 16:24:13 -- common/autotest_common.sh@1451 -- # return 0
00:20:30.180 16:24:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:20:30.180 16:24:13 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:30.180 16:24:13 -- common/autotest_common.sh@10 -- # set +x
00:20:30.180 16:24:13 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:20:30.180 16:24:13 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:30.180 16:24:13 -- common/autotest_common.sh@10 -- # set +x
00:20:30.180 16:24:13 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:30.180 16:24:13 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:30.180 16:24:13 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:30.180 16:24:13 -- spdk/autotest.sh@391 -- # hash lcov
00:20:30.180 16:24:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:20:30.180 16:24:13 -- spdk/autotest.sh@393 -- # hostname
00:20:30.180 16:24:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:30.439 geninfo: WARNING: invalid characters removed from testname!
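The lcov capture above and the -r filters in the lines that follow do the work of the coverage step: counters are captured from the instrumented tree, merged with the pre-test baseline, and then third-party and uninteresting paths are stripped from the combined tracefile. A minimal sketch of that flow, with the paths taken from the log; the LCOV_OPTS, SRC, and OUT shorthands are introduced here for readability and are not variables from the SPDK scripts:

    # Sketch of the capture-and-filter flow recorded in the log, not the autotest.sh source.
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    SRC=/home/vagrant/spdk_repo/spdk
    OUT=$SRC/../output

    # Capture counters from the instrumented tree into the per-run tracefile.
    lcov $LCOV_OPTS -c -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge with the pre-test baseline, then drop paths that should not appear in the report.
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"

Filtering in place with -r means the intermediate tracefiles can be deleted afterwards (the rm -f below), leaving cov_total.info as the single coverage artifact.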
00:20:57.009 16:24:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:58.397 16:24:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:00.945 16:24:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:03.475 16:24:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:06.005 16:24:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:09.284 16:24:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:11.816 16:24:55 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:11.816 16:24:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:11.816 16:24:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:21:11.816 16:24:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:11.816 16:24:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:11.816 16:24:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:11.816 16:24:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:11.816 16:24:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:11.816 16:24:55 -- paths/export.sh@5 -- $ export PATH
00:21:11.816 16:24:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:11.816 16:24:55 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:21:11.816 16:24:55 -- common/autobuild_common.sh@444 -- $ date +%s
00:21:11.816 16:24:55 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720801495.XXXXXX
00:21:11.816 16:24:55 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720801495.eTgEvz
00:21:11.816 16:24:55 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:21:11.816 16:24:55 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:21:11.816 16:24:55 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:21:11.816 16:24:55 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:21:11.816 16:24:55 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:21:11.816 16:24:55 -- common/autobuild_common.sh@460 -- $ get_config_params
00:21:11.816 16:24:55 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:21:11.816 16:24:55 -- common/autotest_common.sh@10 -- $ set +x
00:21:11.816 16:24:55 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:21:11.816 16:24:55 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:21:11.816 16:24:55 -- pm/common@17 -- $ local monitor
00:21:11.816 16:24:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:11.816 16:24:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:11.816 16:24:55 -- pm/common@25 -- $ sleep 1
00:21:11.816 16:24:55 -- pm/common@21 -- $ date +%s
00:21:11.816 16:24:55 -- pm/common@21 -- $ date +%s
00:21:11.816 16:24:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720801495
00:21:11.816 16:24:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720801495
00:21:11.816 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720801495_collect-vmstat.pm.log
00:21:11.816 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720801495_collect-cpu-load.pm.log
00:21:12.751 16:24:56 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:21:12.751 16:24:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:21:12.751 16:24:56 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:21:12.751 16:24:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:21:12.751 16:24:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:21:12.751 16:24:56 -- spdk/autopackage.sh@19 -- $ timing_finish
00:21:12.751 16:24:56 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:12.751 16:24:56 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:21:12.751 16:24:56 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:12.751 16:24:56 -- spdk/autopackage.sh@20 -- $ exit 0
00:21:12.751 16:24:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:21:12.751 16:24:56 -- pm/common@29 -- $ signal_monitor_resources TERM
00:21:12.751 16:24:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:21:12.751 16:24:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:12.751 16:24:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:21:12.751 16:24:56 -- pm/common@44 -- $ pid=86712
00:21:12.751 16:24:56 -- pm/common@50 -- $ kill -TERM 86712
00:21:12.751 16:24:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:12.751 16:24:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:21:12.751 16:24:56 -- pm/common@44 -- $ pid=86713
00:21:12.751 16:24:56 -- pm/common@50 -- $ kill -TERM 86713
00:21:12.751 + [[ -n 5148 ]]
00:21:12.751 + sudo kill 5148
00:21:12.763 [Pipeline] }
00:21:12.783 [Pipeline] // timeout
00:21:12.790 [Pipeline] }
00:21:12.810 [Pipeline] // stage
00:21:12.816 [Pipeline] }
00:21:12.835 [Pipeline] // catchError
00:21:12.846 [Pipeline] stage
00:21:12.848 [Pipeline] { (Stop VM)
00:21:12.864 [Pipeline] sh
00:21:13.144 + vagrant halt
00:21:16.430 ==> default: Halting domain...
00:21:21.747 [Pipeline] sh
00:21:22.028 + vagrant destroy -f
00:21:26.210 ==> default: Removing domain...
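The pm/common trace above shows the resource monitors being torn down the same way they were started: each collector left a <name>.pid file under the output/power directory, and signal_monitor_resources walks those files and sends SIGTERM. A minimal sketch of that pid-file pattern, assuming the directory layout seen in the log; this is an illustration, not the pm/common source:

    # Illustrative shutdown loop in the spirit of "signal_monitor_resources TERM".
    MON_DIR=/home/vagrant/spdk_repo/spdk/../output/power   # directory seen in the log

    for name in collect-cpu-load collect-vmstat; do
        pidfile="$MON_DIR/$name.pid"
        [[ -e $pidfile ]] || continue            # this monitor was never started
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true    # ask the collector to exit cleanly
        rm -f "$pidfile"
    done

Registering the shutdown via "trap stop_monitor_resources EXIT" (as autobuild_common.sh does above) ensures the collectors are reaped even if the packaging step fails partway through.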
00:21:26.221 [Pipeline] sh
00:21:26.500 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:21:26.509 [Pipeline] }
00:21:26.527 [Pipeline] // stage
00:21:26.532 [Pipeline] }
00:21:26.550 [Pipeline] // dir
00:21:26.556 [Pipeline] }
00:21:26.572 [Pipeline] // wrap
00:21:26.579 [Pipeline] }
00:21:26.594 [Pipeline] // catchError
00:21:26.603 [Pipeline] stage
00:21:26.605 [Pipeline] { (Epilogue)
00:21:26.619 [Pipeline] sh
00:21:26.899 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:33.530 [Pipeline] catchError
00:21:33.532 [Pipeline] {
00:21:33.549 [Pipeline] sh
00:21:33.827 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:34.085 Artifacts sizes are good
00:21:34.093 [Pipeline] }
00:21:34.113 [Pipeline] // catchError
00:21:34.125 [Pipeline] archiveArtifacts
00:21:34.132 Archiving artifacts
00:21:34.298 [Pipeline] cleanWs
00:21:34.308 [WS-CLEANUP] Deleting project workspace...
00:21:34.308 [WS-CLEANUP] Deferred wipeout is used...
00:21:34.314 [WS-CLEANUP] done
00:21:34.317 [Pipeline] }
00:21:34.334 [Pipeline] // stage
00:21:34.341 [Pipeline] }
00:21:34.358 [Pipeline] // node
00:21:34.365 [Pipeline] End of Pipeline
00:21:34.411 Finished: SUCCESS
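The epilogue's compress_artifacts.sh and check_artifacts_size.sh are not expanded in this log, so the sketch below is only a generic size gate of the kind the "Artifacts sizes are good" message suggests; the 1024 MB limit and the du-based accounting are assumptions, not the contents of those scripts:

    # Hypothetical artifact-size gate; not the actual check_artifacts_size.sh.
    LIMIT_MB=1024                              # assumed threshold, not taken from the job config
    total_mb=$(du -sm output | cut -f1)        # size of the collected output directory in MB
    if (( total_mb > LIMIT_MB )); then
        echo "Artifacts too large: ${total_mb} MB (limit ${LIMIT_MB} MB)" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"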